About the exact simulation of bivariate (reciprocal) Archimax copulas

1 Introduction

We recall that a bivariate copula is the restriction to $[0,1]^2$ of a joint distribution function of a random vector $(X,Y)$ with the property that both $X$ and $Y$ are uniformly distributed on the interval $[0,1]$, i.e., $F_X(x) := \mathbb{P}(X \le x) = x$ and $F_Y(y) := \mathbb{P}(Y \le y) = y$ for $0 \le x,y \le 1$. Copulas have been studied extensively, since by Sklar's theorem the distribution function of an arbitrary bivariate random vector $(X,Y)$ can always be written as $C(F_X(x), F_Y(y))$ for some copula $C$, so that copulas provide an analytic tool to separate the dependence structure from the marginals. Similarly, the survival function of $(X,Y)$ can always be written as $\hat{C}(1-F_X(x), 1-F_Y(y))$ for some copula $\hat{C}$, called the survival copula. If $F_X, F_Y$ are continuous, the copula and survival copula of $(X,Y)$ are even unique; see [5,17] for popular textbooks on the topic.

The present article deals with the specific family of bivariate Archimax copulas, introduced in [1] with the intention of creating a large parametric family of copulas with desirable properties in the context of extreme-value theory. For detailed background on dependence properties we refer to [6]; inference for Archimax copulas is investigated in [3]. Bivariate Archimax copulas are parameterized in terms of a pair $(\varphi, \ell)$ consisting of a continuous and convex function $\varphi : [0,\infty) \to [0,1]$ with $\varphi(0)=1$ and $\lim_{x\to\infty} \varphi(x) = 0$, and a norm $\ell$ on $\mathbb{R}^2$ with normalization $\ell(1,0) = \ell(0,1) = 1$ that is orthant-monotonic, which means that $\ell(x,y) = \ell(|x|,|y|)$.
Intuitively, this means that its unit circle is axis-symmetric, so that its unit simplex $\{(x,y)\in[0,1]^2 : \ell(x,y)=1\}$ lies between the ones corresponding to $\|\cdot\|_1$ and $\|\cdot\|_\infty$. It is well known from [1,2] that there exists a random vector $(X,Y)$ on the positive orthant $(0,\infty)^2$ whose survival function is given by $\mathbb{P}(X>x,\,Y>y) = \varphi \circ \ell(x,y)$, $x,y \ge 0$. The unique survival copula of $(X,Y)$ is called the Archimax copula, given by

(1) $C_{\varphi,\ell}(x,y) = \varphi \circ \ell\big[\varphi^{-1}(x), \varphi^{-1}(y)\big], \quad 0 \le x,y \le 1.$

In Section 5, we will further provide an exact simulation algorithm for reciprocal Archimax copulas, which have the algebraic form

(2) $C_{*,\psi,\ell}(x,y) = \frac{x\,y}{C_{e^{-\psi},\ell}(x,y)}, \quad 0 \le x,y \le 1,$

where $\ell$ is the same as above and $\psi : (0,\infty) \to [0,\infty)$ is a convex function with $\lim_{x\to\infty}\psi(x)=0$ (just like $\varphi$) and $\lim_{x\searrow 0}\psi(x)=\infty$ (unlike $\varphi$). The nomenclature originates from the algebraic resemblance to the Archimax copulas in (1) and was introduced in [7] for the special case $\ell = \|\cdot\|_1$. These copulas arise as copulas of bivariate max-infinitely divisible random vectors whose exponent measure has an $\ell$-symmetric survival function.

The function $\varphi$ is called an Archimedean generator, since the subfamily of copulas of the form $C_{\varphi,\,\ell=\|\cdot\|_1}$ coincides with the Archimedean copulas; see [16] for background on the matter.
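Formula (1) can be checked directly on a familiar special case. The following sketch (the parameter value $t=2$ and all identifier names are illustrative choices of ours) instantiates (1) with the extreme-value generator $\varphi(x)=\exp(-x)$ and $\ell = \|\cdot\|_t$, which reproduces the well-known Gumbel (logistic) extreme-value copula:

```python
import math

t = 2.0  # illustrative logistic parameter, t >= 1

def ell(x, y):                      # l = t-norm, a valid stable tail dependence function
    return (x**t + y**t) ** (1.0 / t)

def phi(x):                         # extreme-value generator phi(x) = exp(-x)
    return math.exp(-x)

def phi_inv(u):
    return -math.log(u)

def C(u, v):                        # Archimax copula, formula (1)
    return phi(ell(phi_inv(u), phi_inv(v)))

# With phi(x) = exp(-x) this is the Gumbel copula
# exp(-((-ln u)^t + (-ln v)^t)^(1/t)):
u, v = 0.3, 0.8
expected = math.exp(-(((-math.log(u))**t + (-math.log(v))**t) ** (1.0 / t)))
assert abs(C(u, v) - expected) < 1e-12
```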
Similarly, $\psi$ will be called a generator for the reciprocal Archimax copula, since the special case $C_{*,\psi,\,\ell=\|\cdot\|_1}$ coincides with the bivariate reciprocal Archimedean copulas introduced in [7]. The norm $\ell$ is called a stable tail dependence function, and the function $A_\ell(x) := \ell(x, 1-x)$ for $x \in [0,1]$ is called the Pickands dependence function. (Note that $\ell$ is fully determined by $A_\ell$ via $\ell(x,y) = (x+y)\,A_\ell\big(\frac{x}{x+y}\big)$.) The nomenclature originates from the field of multivariate extreme-value theory, since the subfamily of copulas of the form $C_{\varphi(x)=\exp(-x),\,\ell}$ coincides with the bivariate extreme-value copulas; see [8] for background on the matter.

Regarding the strength of association between the components $X$ and $Y$, there are two different regimes. If the function $\varphi$ happens to be the Laplace transform of a positive random variable $M$, we have non-negative association, while negative association may occur in the complementary case, when $\varphi$ is a so-called strict Archimedean generator (not a Laplace transform). The case of positive association is significantly better understood, because in this case the survival function $\varphi\circ\ell(x,y) = \mathbb{E}[\exp\{-M\,\ell(x,y)\}]$ equals that of a scale mixture of a bivariate min-stable exponential distribution, which is a well-investigated concept of positive association; see [9, Chapter 6, p. 169 ff]. In particular, for random vectors with survival function $\exp(-m\,\ell(x,y))$, or equivalently for extreme-value copulas $C_{\varphi(x)=\exp(-x),\,\ell}$, there exist exact simulation algorithms due to [4], which are feasible for many $\ell$.
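The stated one-to-one relation between $\ell$ and $A_\ell$ is easy to verify numerically. A minimal sketch for the logistic family (the parameter value and the names are our illustrative choices):

```python
import math

t = 2.5  # illustrative logistic parameter, t >= 1

def ell(x, y):
    """Logistic stable tail dependence function l(x, y) = (x^t + y^t)^(1/t)."""
    return (x**t + y**t) ** (1.0 / t)

def A(w):
    """Pickands dependence function A(w) = l(w, 1 - w), w in [0, 1]."""
    return ell(w, 1.0 - w)

# l is recovered from A via l(x, y) = (x + y) * A(x / (x + y)):
for x, y in [(0.2, 0.7), (1.3, 0.4), (2.0, 2.0)]:
    assert abs(ell(x, y) - (x + y) * A(x / (x + y))) < 1e-12
```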
Our intention is to provide an algorithm that is equally general with respect to the choice of $\ell$, but that is not restricted to Laplace transforms $\varphi(x) = \mathbb{E}[\exp(-Mx)]$ and works also for strict generators $\varphi$, in particular for the important special case $\varphi(x) = (1-x)_+$.

We recap the existing literature that forms the basis for our derivation. On the one hand, there is the following decomposition result due to [2].

Theorem 1.1 (Charpentier et al. [2]). Let $\ell$ be an orthant-monotonic norm on $\mathbb{R}^2$ subject to the normalization $\ell(1,0)=\ell(0,1)=1$, and let $\varphi : [0,\infty)\to[0,1]$ be a continuous and convex function with $\varphi(0)=1$ and $\lim_{x\to\infty}\varphi(x)=0$. Then there exist a random vector $(X_\ell, Y_\ell)$ with survival function $\mathbb{P}(X_\ell > x,\, Y_\ell > y) = (1-\ell(x,y))_+$ and an independent positive random variable $R_\varphi$ with the property $\mathbb{E}[(1-x/R_\varphi)_+] = \varphi(x)$, whose distribution is uniquely determined by the function $\varphi$. Furthermore, the random vector $(X,Y) := R_\varphi\,(X_\ell, Y_\ell)$ has survival function $\mathbb{P}(X>x,\,Y>y) = \varphi\circ\ell(x,y)$, $x,y\ge 0$, and survival copula $C_{\varphi,\ell}$. Thus, the random vector $(\varphi(R_\varphi X_\ell), \varphi(R_\varphi Y_\ell))$ has distribution function $C_{\varphi,\ell}$.

In words, Theorem 1.1 states that every Archimax copula arises as a scale mixture of the special Archimax copula with generator $\varphi(x)=(1-x)_+$, which equals the Archimax copula with minimal association for a fixed $\ell$.
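Theorem 1.1 can be illustrated with fully explicit ingredients. In the sketch below (our own construction, not taken from the references) we use the comonotone norm $\ell = \|\cdot\|_\infty$, for which $\mathbb{P}(W>x,\,W>y) = 1-\max(x,y)$ shows that $(X_\ell,Y_\ell)=(W,W)$ with $W \sim \mathcal{U}[0,1]$, together with a radius $R=\sqrt{U}$, i.e., $\mathbb{P}(R\le x)=x^2$, whose Williamson-2-transform works out to $\varphi(x) = \mathbb{E}[(1-x/R)_+] = (1-x)_+^2$. The theorem then predicts that $\varphi(R\,W)$ is uniform on $[0,1]$, which a Monte Carlo check confirms:

```python
import random

rng = random.Random(1)
n = 200_000
vals = []
for _ in range(n):
    w = rng.random()              # (X_l, Y_l) = (W, W) for the max-norm
    r = rng.random() ** 0.5       # P(R <= x) = x^2 on [0, 1]
    vals.append((1.0 - r * w) ** 2)   # phi(R * W) with phi(x) = (1 - x)_+^2; R*W <= 1

mean = sum(vals) / n
frac = sum(v <= 0.3 for v in vals) / n
# uniform marginal of the copula sample: mean 1/2 and P(. <= 0.3) = 0.3
assert abs(mean - 0.5) < 0.01
assert abs(frac - 0.3) < 0.01
```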
When simulating $(X,Y)$ with survival copula $C_{\varphi,\ell}$, this allows us to perform two independent simulation steps: first, simulation of $R_\varphi$, and second, simulation of $(X_\ell,Y_\ell)$ with survival copula given in terms of $\ell$ by

$C_\ell(x,y) := C_{\varphi(x)=(1-x)_+,\,\ell}(x,y) = (1-\ell(1-x,1-y))_+.$

On the other hand, in the original reference [1] the authors propose an exact simulation algorithm, which we are going to refine. According to Theorem 1.1, we may focus on the simulation of $(X_\ell, Y_\ell)$, which corresponds to the special case $\varphi(x)=(1-x)_+$, and for which the method of [1] can be formulated as in the following theorem.

Theorem 1.2 (Capéraà et al. [1]). Assume that the second derivative $A_\ell''$ of $A_\ell$ exists and is continuous everywhere on $(0,1)$.
Let $U$, $V$ and $Z_\ell$ be three independent random variables, where $U,V \sim \mathcal{U}[0,1]$ and

$\mathbb{P}(Z_\ell \le z) = z + z(1-z)\,\frac{A_\ell'(z)}{A_\ell(z)}, \quad z \in [0,1].$

Further denote the density of $Z_\ell$ by $f_{Z_\ell}$ (it exists by the assumption of existing $A_\ell''$) and define the threshold function

$p_\ell(z) := \frac{z(1-z)\,A_\ell''(z)}{f_{Z_\ell}(z)\,A_\ell(z)}, \quad z \in (0,1).$

Under these assumptions, the random vector $(X_\ell, Y_\ell)$ from Theorem 1.1 admits the stochastic representation

(3) $(X_\ell,Y_\ell) \sim \begin{cases} \left(\dfrac{Z_\ell}{A_\ell(Z_\ell)},\, \dfrac{1-Z_\ell}{A_\ell(Z_\ell)}\right), & \text{if } U > p_\ell(Z_\ell),\\[1ex] V\left(\dfrac{Z_\ell}{A_\ell(Z_\ell)},\, \dfrac{1-Z_\ell}{A_\ell(Z_\ell)}\right), & \text{if } U \le p_\ell(Z_\ell).\end{cases}$

Our proposed algorithm below provides concrete implementation advice for simulation from $C_\ell$. For the simulation of $C_\ell$ we use an approach different from Theorem 1.2; in particular, we do not require $A_\ell''$ to exist, but we do make use of the distributional statement (3) in our derivation.
The main idea is to use the well-known representation

(4) $A_\ell(x) = 2\,\mathbb{E}\big[\max\{Qx,\,(1-Q)(1-x)\}\big] = 1 + \int_0^x \big(1 - 2\,\mathbb{P}(Q \le 1-q)\big)\,\mathrm{d}q,$

which defines a one-to-one relationship between the parameterizing norm $\ell$ and the distribution of a random variable $Q \in [0,1]$ with $2\,\mathbb{E}[Q] = 1$. This representation of $\ell$ in terms of $Q$ is called the Pickands representation in extreme-value theory. The last equality in (4) shows that the right-hand version of the derivative $A_\ell'$, which is well-defined everywhere by convexity of $A_\ell$ and which we simply denote by $A_\ell'$ throughout, essentially determines the distribution function of $Q$; if it exists, the second derivative $A_\ell''$ equals the density of $1-Q$. Whereas the analytical treatment of extreme-value copulas in terms of $A_\ell$ is traditional, the article [18] provides some convincing arguments why a treatment in terms of the cdf of $Q$ (i.e., essentially $A_\ell'$) can sometimes be more convenient. The appearance of $A_\ell'$ and $A_\ell''$ in (3) above thus hints at the possibility of reformulating the simulation algorithm in terms of $Q$.
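Representation (4) is straightforward to check by Monte Carlo for a concrete $Q$. In the sketch below we take $Q \sim \mathcal{U}[0,1]$ (an illustrative choice of ours, satisfying $2\,\mathbb{E}[Q]=1$), for which the last equality in (4) gives $A_\ell'(x) = 1-2(1-x)$ and hence $A_\ell(x) = 1 - x(1-x)$:

```python
import random

rng = random.Random(7)
n = 200_000
for x in (0.2, 0.5, 0.8):
    est = 0.0
    for _ in range(n):
        q = rng.random()                       # Q ~ U[0,1]
        est += max(q * x, (1.0 - q) * (1.0 - x))
    est = 2.0 * est / n                        # 2 E[max{Qx, (1-Q)(1-x)}]
    # compare with the closed form A(x) = 1 - x(1-x)
    assert abs(est - (1.0 - x * (1.0 - x))) < 0.01
```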
In fact, we manage to provide a simulation algorithm for $C_\ell$ that is exact and feasible whenever the following prerequisites are met:

Assumption 1 (Exact evaluation of $\ell$). We can evaluate $\ell(x,y)$ exactly and efficiently for arbitrary $x,y \ge 0$.

Assumption 2 (Exact evaluation of $A_\ell'$). We can evaluate $A_\ell'(x)$ exactly and efficiently for arbitrary $x \in (0,1)$.

Assumption 3 (Exact simulation of $Q$). The random variable $Q$ can be simulated exactly and efficiently.

Assumptions 1 and 2 are clearly mild; in particular, they are minimal requirements for simulation according to (3), or via the inverse Rosenblatt transformation method based on partial derivatives, to work at all. The decisive question the thoughtful reader should raise at this point is: "Why is it useful to reformulate the known model (3) in terms of $Q$?" To this end, we point out that, due to the aforementioned Pickands representation, the random variable $Q$ plays a central role in extreme-value theory; in particular, the law of $Q$ is feasible very often, as we will demonstrate below in terms of a comprehensive survey of models. For instance, [4] provides two simulation algorithms for extreme-value copulas precisely under Assumption 3. While the algorithms of [4] are restricted to extreme-value copulas but treat the general case of $d$-dimensional extreme-value copulas also for $d>2$, the algorithm we develop is restricted to the bivariate case but can be leveraged into an algorithm for arbitrary bivariate Archimax copulas. The $d$-dimensional generalization remains an interesting open problem, to date solved only for the special case $\ell = \|\cdot\|_p$; see [15].

The remainder of the article is organized as follows. Section 2 derives our simulation algorithm, while Section 3 demonstrates it by providing examples.
Section 4 discusses some interesting aspects of the particular case when $\ell$ is symmetric, and Section 5 treats the situation of bivariate reciprocal Archimax copulas, in particular showing that our simulation algorithm for bivariate Archimax copulas immediately implies an exact simulation algorithm for bivariate reciprocal Archimax copulas as well. Section 6 concludes.

2 Exact simulation from $C_\ell$ in terms of $Q$

We derive an alternative stochastic representation for the random vector $(X_\ell, Y_\ell)$ in Theorem 1.1, which serves as a convenient basis for simulation under our Assumptions 1-3. One problem with the stochastic representation of Theorem 1.2 is that it requires $A_\ell''$ to exist, which is restrictive; our method resolves this problem. The other problem is that the probability law of $Z_\ell$ is rather complicated to simulate in many cases; our proposed method replaces the requirement to simulate $Z_\ell$ with the requirement to simulate $Q$ from the Pickands representation (4), and we demonstrate that this is feasible in many cases.

For the sake of clarity, we recall that the survival function of $(X_\ell,Y_\ell)$ is given by $(1-\ell(x,y))_+ = C_\ell(1-x,1-y)$, and the distribution function of $(X_\ell, Y_\ell)$ equals

(5) $\hat{C}_\ell(x,y) := x + y - 1 + C_\ell(1-x, 1-y),$

which is itself a copula (the survival copula of $C_\ell$), and $(1-X_\ell, 1-Y_\ell) \sim C_\ell$. Our proposed method to simulate $(X_\ell,Y_\ell)$ requires the exact simulation of an absolutely continuous random variable on $(0,1)$ whose density $h_Q$ depends on a realization of $Q$ from the Pickands representation (4).
As a first auxiliary step, the following lemma states the explicit form of this density. It furthermore provides the basis for an exact and efficient simulation from $h_Q$ via acceptance-rejection sampling; see [12, p. 234-235] for background. The idea of acceptance-rejection sampling for a (possibly complicated) density $h$ is to find an easy-to-simulate density $g$, called the comparison density, and a finite constant $c>0$ such that $h \le c\,g$. Given this, one iteratively simulates variates $X_g \sim g$ and independent $U \sim \mathcal{U}[0,1]$ until finally $U \le h(X_g)/(c\,g(X_g))$, and then returns the resulting sample $X_g$ satisfying this condition as a sample from $h$.

Lemma 2.1 (Auxiliary density and associated comparison density). Let $\ell$ be an orthant-monotonic norm on $\mathbb{R}^2$ which is normalized to satisfy $\ell(0,1)=\ell(1,0)=1$, and fix $q \in [0,1]$. With the positive constant

$c_q := \begin{cases} 1 - \dfrac{2}{\ell\left(\frac{1}{q},\frac{1}{1-q}\right)}, & \text{if } q \in (0,1),\\[1ex] 1, & \text{if } q \in \{0,1\},\end{cases}$

the function

$h_q(x) := \frac{1}{c_q}\begin{cases} 1 - \dfrac{1}{A_\ell(x)}\left(1 - 2x - \dfrac{x(1-x)\,A_\ell'(x)}{A_\ell(x)}\right), & \text{if } x < 1-q,\\[1ex] 1 + \dfrac{1}{A_\ell(x)}\left(1 - 2x - \dfrac{x(1-x)\,A_\ell'(x)}{A_\ell(x)}\right), & \text{if } x \ge 1-q,\end{cases} \qquad x\in(0,1),$

defines a probability density on $(0,1)$.
Furthermore, we have $c_q h_q \le d_q g_q$, where $g_q$ is a piecewise polynomial probability density on $(0,1)$ that is given by

(6) $g_q(x) := \frac{1}{d_q}\big[\mathbf{1}_{\{x<1-q\}}(1+2x) + \mathbf{1}_{\{x\ge 1-q\}}(3-2x)\big], \quad x\in(0,1),$

with normalizing constant $d_q := 2\,(1-q(1-q)) > 0$.

Proof. We observe that the function

$H_q(x) := \frac{1}{c_q}\begin{cases} x - \dfrac{x(1-x)}{A_\ell(x)}, & \text{if } x<1-q,\\[1ex] x + \dfrac{x(1-x)}{A_\ell(x)} - 2\,\dfrac{(1-q)\,q}{A_\ell(1-q)}, & \text{if } x\ge 1-q,\end{cases}$

is a continuous distribution function on $[0,1]$ for arbitrary $q\in[0,1]$ with $H_q(0)=0$ and $H_q(1)=1$. To verify this, we need to check that $x \pm x(1-x)/A_\ell(x)$ is non-decreasing, which follows from the identity

$\frac{\partial}{\partial x}\left(x \pm \frac{x(1-x)}{A_\ell(x)}\right) = 1 \pm \frac{\mathbb{P}(Z_\ell > x) - x}{A_\ell(x)},$

together with the bounds $\mathbb{P}(Z_\ell > x)\in[0,1]$ and $A_\ell(x) \ge \max\{x,1-x\}$.
With similar estimates, using again the similarity between the definition of $h_q(x)$ and $\mathbb{P}(Z_\ell \le x)$, we further observe that the density $h_q = H_q'$ of $H_q$ is bounded from above by $d_q g_q / c_q$ for $g_q$ given in (6). □

A random variable $X_{g_q}$ with density $g_q$ can be simulated via the inversion method, see [12, p. 234-235] for background, since its cumulative distribution function

$G_q(x) := \mathbb{P}(X_{g_q} \le x) = \int_0^x g_q(y)\,\mathrm{d}y, \quad x\in[0,1],$

can be inverted in closed form according to

$G_q^{-1}(v) = \begin{cases} -\dfrac{1}{2} + \dfrac{1}{2}\sqrt{1 + 8\,(1-q(1-q))\,v}, & \text{if } v < \dfrac{(1-q)(2-q)}{2\,(1-q(1-q))},\\[1ex] \dfrac{3}{2} - \dfrac{1}{2}\sqrt{9 - 8\big((1-q(1-q))\,v + q(1-q)\big)}, & \text{else},\end{cases}$

for $v\in(0,1)$. Consequently, Lemma 2.1 implies that a random variable $X_{h_q}$ with density $h_q$ can be simulated exactly via the acceptance-rejection method under Assumptions 1 and 2. Algorithm 1 summarizes the resulting simulation method for $X_{h_q} \sim h_q$ based on this idea.

Algorithm 1. Simulation of $X_{h_q} \sim h_q$ via acceptance-rejection

1: procedure SimulateH($q$)
2:   $U \leftarrow 1$ and $T \leftarrow 0$
3:   while $U > T$ do
4:     Simulate $U, V$ uniform on $[0,1]$
5:     if $V < \frac{(1-q)(2-q)}{2(1-q(1-q))}$ then
6:       $X_{g_q} \leftarrow -\frac{1}{2} + \frac{1}{2}\sqrt{1+8\,(1-q(1-q))\,V}$
7:     else
8:       $X_{g_q} \leftarrow \frac{3}{2} - \frac{1}{2}\sqrt{9-8\big((1-q(1-q))\,V + q(1-q)\big)}$
9:     end if
10:    ▷ $X_{g_q}$ has comparison density $g_q$ given in (6)
11:    if $X_{g_q} < 1-q$ then
12:      $T \leftarrow \frac{1}{1+2X_{g_q}}\Big[1 - \frac{1}{A_\ell(X_{g_q})}\Big(1-2X_{g_q} - \frac{X_{g_q}(1-X_{g_q})\,A_\ell'(X_{g_q})}{A_\ell(X_{g_q})}\Big)\Big]$
13:    else
14:      $T \leftarrow \frac{1}{3-2X_{g_q}}\Big[1 + \frac{1}{A_\ell(X_{g_q})}\Big(1-2X_{g_q} - \frac{X_{g_q}(1-X_{g_q})\,A_\ell'(X_{g_q})}{A_\ell(X_{g_q})}\Big)\Big]$
15:    end if
16:  end while
17:  return $X_{h_q} \leftarrow X_{g_q}$
18: end procedure

Based on the availability of an efficient simulation of $X_{h_q}$ via Algorithm 1, the following theorem is the main result of the present article.

Theorem 2.2 (Exact simulation of $C_\ell$). Let $\ell$ be an orthant-monotonic norm on $\mathbb{R}^2$ subject to the normalization
$\ell(0,1)=\ell(1,0)=1$. Let $V \sim \mathcal{U}[0,1]$ and $Q$ from the Pickands representation (4) of $A_\ell$ be independent random variables. Given $Q$ (and independent of $V$), let $X_{h_Q}$ be a random variable with density $h_Q$, simulated according to Algorithm 1. The random vector $(X_\ell,Y_\ell)$ with survival function $C_\ell(1-x,1-y)$ and distribution function $\hat{C}_\ell$ has the stochastic representation

(7) $(X_\ell,Y_\ell) \sim \begin{cases}\left(\dfrac{X_{h_Q}}{A_\ell(X_{h_Q})},\, \dfrac{1-X_{h_Q}}{A_\ell(X_{h_Q})}\right), & \text{if } \ell\left(\dfrac{V}{2Q}, \dfrac{V}{2(1-Q)}\right) \ge 1,\\[1ex] \left(\dfrac{V}{2Q},\, \dfrac{V}{2(1-Q)}\right), & \text{if } \ell\left(\dfrac{V}{2Q}, \dfrac{V}{2(1-Q)}\right) < 1.\end{cases}$

Proof. It is sufficient to prove the claim for absolutely continuous $Q$. Why? An arbitrary $Q$ (possibly with atoms) satisfying $\mathbb{E}[Q]=1/2$ may be approximated arbitrarily well in distribution by a sequence $Q_n$ of absolutely continuous random variables satisfying $\mathbb{E}[Q_n]=1/2$. These correspond to a sequence of stable tail dependence functions $\ell_n$ that converge point-wise, which implies that the respective distributional statement also holds in the limit $\ell$ associated with $Q$. Note that this logic relies on the fact that convergence in distribution $Q_n \to Q$ stands in one-to-one relation with the point-wise convergence $\ell_n \to \ell$, resp. $C_{\ell_n} \to C_\ell$.

Under the assumption of absolute continuity of $Q$, $\ell$ is differentiable, and it is clear that $\hat{C}_\ell$ has a singular component on the $\ell$-simplex $\{(x,y)\in[0,1]^2 : \ell(x,y)=1\}$, zero mass outside the simplex, and a density inside the simplex. Consequently, we may compute this density of $\hat{C}_\ell$ inside the simplex by taking the partial derivative with respect to both components. To this end, we denote by $f_v(q_1,q_2)$ the density of the random vector $\big(v/2Q,\, v/2(1-Q)\big)$ for arbitrary but fixed $v\in(0,1)$, and obtain for $x,y$ with $\ell(x,y)<1$

$\frac{\partial^2}{\partial x\,\partial y}\hat{C}_\ell(x,y) = -\frac{\partial^2}{\partial x\,\partial y}\ell(x,y) = \frac{\partial^2}{\partial x\,\partial y}\int_0^1 \mathbb{P}\big(\max\{2Qx,\,2(1-Q)y\}\le v\big)\,\mathrm{d}v = \int_0^1 \frac{\partial^2}{\partial x\,\partial y}\mathbb{P}\left(\frac{v}{2Q}>x,\, \frac{v}{2(1-Q)}>y\right)\mathrm{d}v = \int_0^1 f_v(x,y)\,\mathrm{d}v.$

Denoting by $p_{\hat{C}_\ell}$ the mass that $\mathrm{d}\hat{C}_\ell$ allocates to the $\ell$-simplex, by $\mathrm{d}\hat{C}_\ell^s$ its singular and by $\mathrm{d}\hat{C}_\ell^c$ its continuous part, we have the decomposition $\hat{C}_\ell = (1-p_{\hat{C}_\ell})\,\hat{C}_\ell^c + p_{\hat{C}_\ell}\,\hat{C}_\ell^s$.
We obtain for $x,y\in[0,1]$ from the above expression

$(1-p_{\hat{C}_\ell})\,\hat{C}_\ell^c(x,y) = \int_0^x\!\int_0^y\!\int_0^1 f_v(s,t)\,\mathrm{d}v\,\mathbf{1}_{\{\ell(s,t)<1\}}\,\mathrm{d}s\,\mathrm{d}t = \mathbb{E}\left[\int_0^1 \mathbf{1}_{\left\{\frac{v}{2Q}\le x,\,\frac{v}{2(1-Q)}\le y\right\}}\,\mathbf{1}_{\left\{\ell\left(\frac{v}{2Q},\frac{v}{2(1-Q)}\right)<1\right\}}\,\mathrm{d}v\right] = \mathbb{P}\left(\frac{V}{2Q}\le x,\, \frac{V}{2(1-Q)}\le y,\, \ell\left(\frac{V}{2Q},\frac{V}{2(1-Q)}\right)<1\right).$

Furthermore, we obtain the probability $1-p_{\hat{C}_\ell}$ by plugging in $x=y=1$, which yields

$1-p_{\hat{C}_\ell} = \mathbb{P}\left(V < \min\left\{2Q,\, 2(1-Q),\, \frac{1}{\ell\left(\frac{1}{2Q},\frac{1}{2(1-Q)}\right)}\right\}\right) = \mathbb{P}\left(V < \frac{1}{\ell\left(\frac{1}{2Q},\frac{1}{2(1-Q)}\right)}\right) = \mathbb{P}\left(\ell\left(\frac{V}{2Q},\frac{V}{2(1-Q)}\right)<1\right) = \mathbb{E}\left[\frac{1}{\ell\left(\frac{1}{2Q},\frac{1}{2(1-Q)}\right)}\right].$

Note that the last equality is stated only for later reference; implicitly, these computations rely on the fact that $1 \ge \min\{2Q,\, 2(1-Q)\} \ge 1\big/\ell\left(\frac{1}{2Q},\frac{1}{2(1-Q)}\right)$ almost surely.
This already proves the first part of our claimed statement.

For the second part, from Theorem 1.2 we obtain that

$\hat{C}_\ell^s(x,y) = \mathbb{P}\left(\left.\frac{Z_\ell}{\ell(Z_\ell,\,1-Z_\ell)}\le x,\ \frac{1-Z_\ell}{\ell(Z_\ell,\,1-Z_\ell)}\le y\ \right|\ U > p_\ell(Z_\ell)\right).$

Using the well-known identity $A_\ell'(z) = 1 - 2\,\mathbb{P}(Q\le 1-z)$ from (4) and the definitions of $p_\ell$ and $\mathbb{P}(Z_\ell\le z)$ in Theorem 1.2, we find

$\mathbb{E}\big[1-p_\ell(Z_\ell)\big] = \mathbb{E}\Bigg[\underbrace{1 - \frac{1}{\ell\left(\frac{1}{2Q},\frac{1}{2(1-Q)}\right)}}_{=\,c_Q,\ \text{see Lemma 2.1}}\Bigg] = \mathbb{P}\left(\ell\left(\frac{V}{2Q},\frac{V}{2(1-Q)}\right)>1\right) = p_{\hat{C}_\ell},$

as well as

$p_{\hat{C}_\ell}\,\mathbb{P}\big(Z_\ell\le z \mid U>p_\ell(Z_\ell)\big) = \mathbb{E}\left[z + \frac{z(1-z)}{A_\ell(z)} - \begin{cases}\dfrac{1}{\ell\left(\frac{1}{2(1-z)},\frac{1}{2z}\right)}, & \text{if } z\le 1-Q,\\[1ex] \dfrac{1}{\ell\left(\frac{1}{2Q},\frac{1}{2(1-Q)}\right)}, & \text{if } z>1-Q\end{cases}\right].$

In the last expression we recognize the distribution function $H_Q$ from the proof of Lemma 2.1, so that this finally implies

$\mathbb{P}\big(Z_\ell\le z \mid U>p_\ell(Z_\ell)\big) = \mathbb{E}\left[\frac{c_Q}{p_{\hat{C}_\ell}}\,H_Q(z)\right] = \mathbb{P}\left(X_{h_Q}\le z\ \middle|\ \ell\left(\frac{V}{2Q},\frac{V}{2(1-Q)}\right)>1\right),$

which establishes the claimed statement upon comparison with (3). □

As mentioned in Section 1, the simulation of an arbitrary bivariate Archimax copula $C_{\varphi,\ell}$ requires simulation from $C_\ell$, which we have now established, and, independently, simulation of a random radius $R_\varphi > 0$, whose probability distribution is determined uniquely by the Williamson-2-transform $\varphi(x) := \mathbb{E}[(1-x/R_\varphi)_+]$. We first note that the distribution function of $R_\varphi$ is given by the Williamson inversion formula

(8) $\mathbb{P}(R_\varphi \le x) = 1 - \varphi(x) + x\,\varphi'(x),$

where $\varphi'$ denotes the right-hand version of the (by convexity almost everywhere existing) derivative of $\varphi$. Regarding examples for feasible $R_\varphi$, we refer to [16]. Generally speaking, acceptance-rejection sampling of $R_\varphi$ based on (8) is feasible whenever $\varphi''$ exists and is analytically tractable, and if the (then existing) density $f_{R_\varphi}(x) = x\,\varphi''(x)$ can be bounded from above by a scalar multiple of an easy-to-simulate density.
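The pieces derived so far can be assembled into a small end-to-end sampler. The following sketch is ours and rests on illustrative assumptions: we instantiate Algorithm 1 and representation (7) for the mixed model $Q \sim \mathcal{U}[0,1]$, so that $A_\ell(x)=1-x(1-x)$ and $\ell(x,y)=x+y-xy/(x+y)$, and combine the result, via Theorem 1.1, with the radius $R=\sqrt{U}$, whose Williamson-2-transform is $\varphi(x)=(1-x)_+^2$; none of the function names below come from the references.

```python
import math
import random

def A(x):                       # Pickands function of the mixed model
    return 1.0 - x * (1.0 - x)

def A_prime(x):                 # its (right-hand) derivative
    return 2.0 * x - 1.0

def ell(x, y):                  # l(x, y) = (x + y) * A(x / (x + y))
    s = x + y
    return 0.0 if s == 0.0 else s * A(x / s)

def simulate_h(q, rng):
    """Algorithm 1: sample X ~ h_q by acceptance-rejection from g_q of (6)."""
    d = 2.0 * (1.0 - q * (1.0 - q))            # normalizing constant d_q
    threshold = (1.0 - q) * (2.0 - q) / d      # = G_q(1 - q)
    while True:
        u, v = rng.random(), rng.random()
        if v < threshold:                      # invert G_q on {x < 1-q}
            x = -0.5 + 0.5 * math.sqrt(1.0 + 4.0 * d * v)
        else:                                  # invert G_q on {x >= 1-q}
            x = 1.5 - 0.5 * math.sqrt(9.0 - 4.0 * d * v - 8.0 * q * (1.0 - q))
        inner = 1.0 - 2.0 * x - x * (1.0 - x) * A_prime(x) / A(x)
        if x < 1.0 - q:
            t = (1.0 - inner / A(x)) / (1.0 + 2.0 * x)
        else:
            t = (1.0 + inner / A(x)) / (3.0 - 2.0 * x)
        if u <= t:                             # accept
            return x

def sample_XY_ell(rng):
    """Theorem 2.2, representation (7): one exact sample of (X_l, Y_l)."""
    q = rng.random()
    while q <= 0.0:                            # guard against q == 0
        q = rng.random()
    v = rng.random()
    x, y = v / (2.0 * q), v / (2.0 * (1.0 - q))
    if ell(x, y) < 1.0:                        # continuous part, inside the simplex
        return x, y
    z = simulate_h(q, rng)                     # singular part, on the simplex
    return z / A(z), (1.0 - z) / A(z)

def sample_archimax(rng):
    """Theorem 1.1: (phi(R X_l), phi(R Y_l)) ~ C_{phi,l}, phi(x) = (1-x)_+^2."""
    r = math.sqrt(rng.random())                # P(R <= x) = x^2 on [0, 1]
    xl, yl = sample_XY_ell(rng)
    return (1.0 - r * xl) ** 2, (1.0 - r * yl) ** 2

rng = random.Random(42)
pairs = [sample_XY_ell(rng) for _ in range(20_000)]
assert all(ell(x, y) <= 1.0 + 1e-9 for x, y in pairs)           # support check
assert abs(sum(x for x, _ in pairs) / len(pairs) - 0.5) < 0.02  # uniform marginal
uv = [sample_archimax(rng) for _ in range(20_000)]
assert abs(sum(u for u, _ in uv) / len(uv) - 0.5) < 0.02        # copula marginal
```

The assertions probe exactly the defining properties: $(X_\ell,Y_\ell)$ lives on $\{\ell \le 1\}$, and both the $C_\ell$ sample and the final Archimax sample must have uniform marginals.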
In contrast, we end this section by providing a simulation algorithm for the special case

(9) $\varphi(x) = \left(1 - \int_{(0,\infty]} (1-e^{-sx})\,\nu(\mathrm{d}s)\right)_{\!+},$

where $(1-e^{-s})\,\nu(\mathrm{d}s)$ is a probability measure on $(0,\infty]$, under the following assumption:

Assumption 4 (Requirement to simulate $R_\varphi$ with generator (9)). The probability measure $\rho_\nu(\mathrm{d}s) := (1-e^{-s})\,\nu(\mathrm{d}s)$ can be simulated exactly.

A generator of the form (9) always implies $R_\varphi \le 1$. There are generators $\varphi$ of this form that are not analytically tractable at first glimpse, in particular for which it is not straightforward to apply direct acceptance-rejection sampling based on the density $f_{R_\varphi}$; an example is provided below.

The function

$\Psi_\nu(x) := \int_{(0,\infty]} (1-e^{-xs})\,\nu(\mathrm{d}s), \quad x \ge 0,$

is a so-called Bernstein function satisfying $\Psi_\nu(1)=1$; in particular, it is concave, so that $\varphi(x) := (1-\Psi_\nu(x))_+$ equals the Williamson-2-transform of some random variable taking values in $(0,1]$, i.e., $\varphi$ is a proper Archimedean generator. Every $R_\varphi$ that arises in this way can be simulated under Assumption 4. Why? We observe that

$\mathbb{P}(R_\varphi \le x) = \Psi_\nu(x) - x\,\Psi_\nu'(x) = \int_{(0,\infty]} \underbrace{\frac{1-e^{-xs}(1+xs)}{1-e^{-s}}}_{=:\ G_s(x)}\,\rho_\nu(\mathrm{d}s).$

For fixed $s\in(0,\infty]$, the function $G_s : [0,1]\to[0,1]$ is the distribution function of a random variable $R_s$ on $[0,1]$.
Furthermore, simulation of $R_s\sim G_s$ is efficient and exact via acceptance-rejection, since $R_s$ takes values in $(0,1]$, has an atom at 1 with probability $se^{-s}/(1-e^{-s})$ and a density $G_s'$ on $(0,1)$ that is bounded from above by $x\mapsto xs^2/(1-e^{-s})$. Summing up, in order to simulate $R_\varphi$, the following steps need to be carried out:

1. Simulate $S\sim\rho_\nu$, using Assumption 4.
2. Simulate $B\sim\mathrm{Bernoulli}(e^{-S}S/(1-e^{-S}))$.
3. If $B=1$, return $R_\varphi=1$.
4. Otherwise, repeatedly simulate $U,V\sim{\mathcal{U}}[0,1]$ independent, until finally $U\le e^{-\sqrt{V}S}$, and then return $R_\varphi=\sqrt{V}$.

There exist some examples of closed-form analytic functions $\Psi_\nu$ for which $\Psi_\nu''$ (hence the resulting density $f_{R_\varphi}$) is inconvenient to deal with, but for which Assumption 4 is still satisfied. For instance, we may extract the following example from [13, Example 3.3].
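Before turning to that example, the four steps can be sketched generically; the sampler for $\rho_\nu$ is passed in as a parameter, and the test uses a point mass at $s=1$ (our own toy choice), for which integrating $G_1$ gives $E[R_\varphi]=1-(3e^{-1}-1)/(1-e^{-1})$:

```python
import math
import random

def sample_R_phi(sample_rho, rng: random.Random) -> float:
    """The four steps above for a generator of the form (9);
    `sample_rho` draws from the probability measure rho_nu."""
    s = sample_rho(rng)                                   # step 1
    p_atom = math.exp(-s) * s / (1.0 - math.exp(-s))
    if rng.random() < p_atom:                             # steps 2 and 3
        return 1.0
    while True:                                           # step 4
        u, v = rng.random(), rng.random()
        if u <= math.exp(-math.sqrt(v) * s):
            return math.sqrt(v)
```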
With parameters α∈[0,1)\alpha \in {[}0,1)and θ≥0\theta \ge 0we consider the Bernstein function Ψν(x)=xΓ(x+θ)Γ(2+θ−α)Γ(1+θ)Γ(x+θ+1−α),x≥0,{\Psi }_{\nu }\left(x)=x\frac{\Gamma \left(x+\theta )\Gamma \left(2+\theta -\alpha )}{\Gamma \left(1+\theta )\Gamma \left(x+\theta +1-\alpha )},\hspace{1.0em}x\ge 0,which corresponds to the measure ν(ds)=Γ(2+θ−α)Γ(θ+1)Γ(1−α)e−θs(1−e−s)α+1(αe−s+θ(1−e−s)).\nu \left({\rm{d}}s)=\frac{\Gamma \left(2+\theta -\alpha )}{\Gamma \left(\theta +1)\Gamma \left(1-\alpha )}\frac{{e}^{-\theta s}}{{(1-{e}^{-s})}^{\alpha +1}}(\alpha {e}^{-s}+\theta (1-{e}^{-s})).The associated random variable SSwith probability distribution ρν{\rho }_{\nu }has density given by (1−α)gθ,2−α+αgθ+1,1−α\left(1-\alpha ){g}_{\theta ,2-\alpha }+\alpha {g}_{\theta +1,1-\alpha }, where ga,b{g}_{a,b}denotes the density of −log(Ba,b)-\log \left({B}_{a,b})and Ba,b{B}_{a,b}has Beta distribution with density proportional to x↦xa−1(1−x)b−1x\mapsto {x}^{a-1}{\left(1-x)}^{b-1}. The simulation of SSis thus straightforward as a mixture of logarithms of Beta distributions, see [12, p. 243] for advice on how to simulate Ba,b{B}_{a,b}. 
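The mixture sampler for $S$ can be sketched with the standard library's Beta generator (parameter values in the test are our own choices); the check uses the Beta-moment identity $E[e^{-S}]=(\theta+\alpha)/(\theta+2-\alpha)$, which follows from $E[B_{a,b}]=a/(a+b)$:

```python
import math
import random

def sample_S_mixture(alpha: float, theta: float, rng: random.Random) -> float:
    """S ~ rho_nu as the mixture (1-alpha)*g_{theta,2-alpha}
    + alpha*g_{theta+1,1-alpha}, where g_{a,b} is the density of
    -log(B_{a,b}) and B_{a,b} has Beta(a, b) distribution."""
    if rng.random() < 1.0 - alpha:
        a, b = theta, 2.0 - alpha
    else:
        a, b = theta + 1.0, 1.0 - alpha
    return -math.log(rng.betavariate(a, b))
```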
Apparently, computing the second derivative of $\Psi_\nu$ and developing an acceptance-rejection sampling method for $R_\varphi$ in this example would be rather inconvenient, whereas the presented method to simulate $R_\varphi$ and its associated Archimax copulas only requires evaluating $\Psi_\nu$ (in order to transform margins to uniforms), which is easy.

3 Examples for feasible $\ell$

We review some popular families of stable tail dependence functions $\ell$ for which the simulation of the associated $Q$ is viable, thereby demonstrating that our algorithm is feasible in many interesting cases.

3.1 $Q$ corresponding to conditionally iid extreme-value copulas

Many (but not all, see [14]) parametric families of symmetric stable tail dependence functions can be considered jointly under one common umbrella, by observing that they are of the particular form
$\ell(x,y)=\ell_F(x,y):=\int_0^\infty 1-F\left(\frac{u}{x}\right)F\left(\frac{u}{y}\right){\rm d}u$,
where $F:[0,\infty)\to[0,1]$ equals the distribution function of a non-negative random variable $X_F$ with unit mean. In probabilistic terms, this means that
$\ell_F(x,y)={\mathbb{E}}[\max\{x\,X_{F,1},\,y\,X_{F,2}\}]$, $x,y\ge 0$,
where $X_{F,1},X_{F,2}\sim F$ are iid random variables. These particular stable tail dependence functions are studied in [10], where it is shown that a random vector $(X,Y)$ with survival function $\exp(-\ell_F(x,y))$ is conditionally iid.
The probability distribution of $Q=Q_F$ in this case can be simulated by
(10) $Q \sim {\bf 1}_{\{B=0\}}\frac{Y_F}{Y_F+X_{F,2}}+{\bf 1}_{\{B=1\}}\frac{X_{F,1}}{X_{F,1}+Y_F}$,
where $X_{F,1},X_{F,2}\sim F$, $B\sim\mathrm{Bernoulli}(1/2)$ and $Y_F\sim x\,{\rm d}F(x)$ are independent random variables. For many popular models, the distribution $F$ as well as its so-called size bias $Y_F$ can be simulated efficiently. We provide some examples below. Note in particular that for both of the following two examples, the Hüsler-Reiss example and the Galambos example, simulation algorithms based on partial derivatives require numerical root search algorithms, while the presented algorithm is exact.

Example 3.1 (Hüsler-Reiss family) With a parameter $\theta>0$ we consider the norm
$\ell(x,y)=y\,\Phi\left[\theta+\frac{1}{2\theta}\log\left(\frac{y}{x}\right)\right]+x\,\Phi\left[\theta+\frac{1}{2\theta}\log\left(\frac{x}{y}\right)\right]$, $x,y\ge 0$,
where $\Phi$ denotes the cdf of a standard normally distributed random variable. This is of the form $\ell_F$ with $F(x)=\Phi[(\log(x)+\theta^2)/(\sqrt{2}\,\theta)]$, a log-normal distribution.
The random variables $X_F\sim F$ and $Y_F\sim x\,{\rm d}F(x)$ can be simulated according to
$X_F\sim e^{-\theta^2+\sqrt{2}\theta W}$, $Y_F\sim e^{\theta^2+\sqrt{2}\theta W}$, $W\sim\Phi$.
The required derivative $A_\ell'$ is available in closed form, given by
$A_\ell'(x)=\Phi\left[\theta+\frac{1}{2\theta}\log\left(\frac{x}{1-x}\right)\right]-\Phi\left[\theta+\frac{1}{2\theta}\log\left(\frac{1-x}{x}\right)\right]+\phi\left[\theta+\frac{1}{2\theta}\log\left(\frac{x}{1-x}\right)\right]\frac{1}{2\theta(1-x)}-\phi\left[\theta+\frac{1}{2\theta}\log\left(\frac{1-x}{x}\right)\right]\frac{1}{2\theta x}$,
where $\phi(x)=\Phi'(x)=\exp(-x^2/2)/\sqrt{2\pi}$ equals the density of $W$.

Example 3.2 (Galambos family) With a parameter $\theta>0$ we consider the norm
$\ell(x,y)=x+y-(x^{-\theta}+y^{-\theta})^{-\frac{1}{\theta}}$, $x,y\ge 0$.
This is of the form $\ell_F$ with $F(x)=1-\exp\{-[\Gamma(1+1/\theta)x]^{\theta}\}$, a Weibull distribution. (Note that in [10, Example 2] there is a typo, mistaking $\theta$ for $1/\theta$.)
The random variable XF∼F{X}_{F}\hspace{0.33em} \sim \hspace{0.33em}Fcan be simulated exactly via the inversion method and YF∼xdF(x){Y}_{F}\hspace{0.33em} \sim \hspace{0.33em}x{\rm{d}}F\left(x)according to YF∼G1/θΓ(1+1/θ),G∼Γ(1+1/θ,1),{Y}_{F}\hspace{0.33em} \sim \hspace{0.33em}\frac{{G}^{1\text{/}\theta }}{\Gamma \left(1+1\hspace{0.1em}\text{/}\hspace{0.1em}\theta )},\hspace{1.0em}G\hspace{0.33em} \sim \hspace{0.33em}\Gamma \left(1+1\hspace{0.1em}\text{/}\hspace{0.1em}\theta ,1),where Γ(β,η)\Gamma \left(\beta ,\eta )denotes the Gamma distribution with density proportional to x↦e−ηxxβ−1x\mapsto {e}^{-\eta x}{x}^{\beta -1}. The required derivative of Aℓ′{A}_{\ell }^{^{\prime} }is available in closed form, given by Aℓ′(x)=(x−θ+(1−x)−θ)−1θ−1((1−x)−θ−1−x−θ−1).{A}_{\ell }^{^{\prime} }\left(x)={({x}^{-\theta }+{\left(1-x)}^{-\theta })}^{-\tfrac{1}{\theta }-1}({\left(1-x)}^{-\theta -1}-{x}^{-\theta -1}).Example 3.3(Cuadras-Augé family) With a parameter θ∈(0,1]\theta \in (0,1]we consider the norm ℓ(x,y)=max{x,y}+(1−θ)min{x,y}=θ‖(x,y)‖∞+(1−θ)‖(x,y)‖1.\ell \left(x,y)=\max \left\{x,y\right\}+\left(1-\theta )\min \left\{x,y\right\}=\theta {\Vert \left(x,y)\Vert }_{\infty }+\left(1-\theta ){\Vert \left(x,y)\Vert }_{1}.This is of the form ℓF{\ell }_{F}with F(x)=1−θ+θ1{θx≥1}F\left(x)=1-\theta +\theta {1}_{\left\{\theta x\ge 1\right\}}, meaning that P(XF=0)=1−θ=1−P(XF=1/θ){\mathbb{P}}\left({X}_{F}=0)=1-\theta =1-{\mathbb{P}}\left({X}_{F}=1\hspace{0.1em}\text{/}\hspace{0.1em}\theta ). Furthermore, the associated size bias YF=1/θ{Y}_{F}=1\hspace{0.1em}\text{/}\hspace{0.1em}\theta equals a constant in this case. The required derivative Aℓ′{A}_{\ell }^{^{\prime} }is piece-wise constant, taking the value −θ-\theta on (0,1/2)\left(0,1\hspace{0.1em}\text{/}\hspace{0.1em}2)and the value θ\theta on [1/2,1){[}1\hspace{0.1em}\text{/}\hspace{0.1em}2,1). 
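Representation (10) can be checked numerically against the closed-form derivative; for the Hüsler-Reiss family of Example 3.1 we use the relation $P(Q\le q)=(1-A_\ell'(1-q))/2$, which can be derived from the Pickands representation since $E[Q]=1/2$. A sketch (seed, sample size and parameter values are our own):

```python
import math
import random

def Phi(x: float) -> float:
    """Standard normal cdf via math.erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x: float) -> float:
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def sample_Q_hr(theta: float, rng: random.Random) -> float:
    """Q for the Huesler-Reiss family via (10), with lognormal
    X_F = exp(-theta^2 + sqrt(2)*theta*W) and size bias
    Y_F = exp(+theta^2 + sqrt(2)*theta*W'), W, W' iid standard normal."""
    x = math.exp(-theta * theta + math.sqrt(2.0) * theta * rng.gauss(0.0, 1.0))
    y = math.exp(theta * theta + math.sqrt(2.0) * theta * rng.gauss(0.0, 1.0))
    if rng.random() < 0.5:
        return y / (y + x)
    return x / (x + y)

def A_prime_hr(theta: float, x: float) -> float:
    """Closed-form A_l'(x) for the Huesler-Reiss family."""
    t1 = theta + math.log(x / (1.0 - x)) / (2.0 * theta)
    t2 = theta + math.log((1.0 - x) / x) / (2.0 * theta)
    return (Phi(t1) - Phi(t2)
            + phi(t1) / (2.0 * theta * (1.0 - x))
            - phi(t2) / (2.0 * theta * x))
```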
The case of general piece-wise constant $A_\ell'$ is further discussed in Section 3.2.

Example 3.4 (Gumbel family $\ell=\Vert.\Vert_p$) When $\ell(x,y)=\Vert(x,y)\Vert_p=(x^p+y^p)^{1/p}$ for some $p>1$, then $\ell=\ell_F$ with associated Fréchet distribution function $F(x)=\exp\{-[\Gamma(1-1/p)x]^{-p}\}$. In particular, $X_F\sim F$ may be simulated exactly via the inversion method, and the size bias $Y_F$ may be simulated according to
$Y_F\sim \frac{G^{-1/p}}{\Gamma(1-1/p)}$, $G\sim\Gamma(1-1/p,1)$,
where $\Gamma(\beta,\eta)$ denotes the Gamma distribution with density proportional to $x\mapsto e^{-\eta x}x^{\beta-1}$. The required derivative $A_\ell'$ is known in closed form, to wit
$A_\ell'(x)=(x^p+(1-x)^p)^{\frac{1}{p}-1}(x^{p-1}-(1-x)^{p-1})$.

In [13,11] it is further explained how to simulate $Q$ associated with convex mixtures of stable tail dependence functions of the type $\ell_F$, and how this can be used to construct interesting and tractable new families. The following is an example in this spirit.

Example 3.5 (Holmgren-Liouville family) Let $S$ be some random variable taking values in $(0,1)$. Then the function
$\Psi(x)=\frac{1-{\mathbb{E}}[(1-S)^x]}{{\mathbb{E}}[S]}$, $x\ge 0$,
is a so-called Bernstein function. Assume further that $S$ is either absolutely continuous or discrete.
In the absolutely continuous case we denote by fS{f}_{S}its density, and in the discrete case we denote by fS{f}_{S}the function fS(z)=P(S=z){f}_{S}\left(z)={\mathbb{P}}\left(S=z), thus treating both the absolutely continuous case and the discrete case jointly with our notation. Similarly, we consider a random variable ZZwith associated function fZ(z)=1E[1+S/log(1−S)]1z+1log(1−z)zfS(z),z∈(0,1].{f}_{Z}\left(z)=\frac{1}{{\mathbb{E}}\left[1+S/\log \left(1-S)]}\left(\frac{1}{z}+\frac{1}{\log \left(1-z)}\right)z{f}_{S}\left(z),\hspace{1.0em}z\in (0,1].This function is bounded from above by fZ(z)≤2fS(z)E[S],z∈(0,1].{f}_{Z}\left(z)\le \frac{{2f}_{S}\left(z)}{{\mathbb{E}}\left[S]},\hspace{1.0em}z\in (0,1].Consequently, we can simulate ZZexactly via acceptance-rejection, whenever we can simulate SS, see [12, p. 235–237]. Now we consider ℓ(x,y)=2Ψ(2)xyx+y−2xyx+y−max{x,y}Ψ1−min{x,y}max{x,y},\ell \left(x,y)=\frac{2\Psi \left(2)x\hspace{0.33em}y}{x+y}-\left(\frac{2x\hspace{0.33em}y}{x+y}-\max \left\{x,y\right\}\right)\Psi \left(1-\frac{\min \left\{x,y\right\}}{\max \left\{x,y\right\}}\right),which is an orthant–monotonic norm with ℓ(1,0)=ℓ(0,1)=1\ell \left(1,0)=\ell \left(0,1)=1, hence a stable tail dependence function. This norm arises as a convex probability mixture of norms of the form ℓFz{\ell }_{{F}^{z}}with F(x)=min{exp(x−1),1}F\left(x)=\min \left\{\exp \left(x-1),1\right\}, when randomizing the power zz, as demonstrated in [13, Example 3.6]. 
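The rejection step for $Z$ is particularly simple because the unknown normalizing constant $E[1+S/\log(1-S)]$ cancels in the acceptance ratio: propose $z\sim f_S$ and accept with probability $(1/z+1/\log(1-z))\,z=1+z/\log(1-z)$, which lies in $(0,1)$ on $(0,1)$. A sketch (the two-point $S$ in the test is our own toy choice):

```python
import math
import random

def sample_Z(sample_S, rng: random.Random) -> float:
    """Acceptance-rejection for Z: propose z ~ f_S and accept with
    probability 1 + z/log(1-z), which lies in (0, 1) for z in (0, 1)."""
    while True:
        z = sample_S(rng)
        if rng.random() < 1.0 + z / math.log(1.0 - z):
            return z
```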
The associated random variable $Q$ can be simulated as follows:

1. Simulate $Z$ according to the aforementioned acceptance-rejection method (requires simulation of $S$).
2. Simulate $Y_F$ from the continuous density $f_{Y_F}(x)=\frac{Z^2e^{-Z}}{Z+e^{-Z}-1}\,x\,e^{Zx}$, $x\in(0,1)$, which can be done via acceptance-rejection with comparison density $g(x)=2x$.
3. Simulate $X_{F,1},X_{F,2}$ iid from the distribution function $F(x)=\min\{\exp(Zx-Z),1\}$, which is straightforward by the inversion method.
4. Apply formula (10) in order to simulate $Q$ from the simulated $X_{F,1},X_{F,2},Y_F$ and a Bernoulli variable $B$.

In order to make this example feasible, the choice of $S$ requires availability of $\Psi$ in closed form, as well as an exact simulation method for $S$. Examples that certainly work include finite-valued $S$, as well as $S$ with beta distribution whose density is proportional to $x\mapsto x^{a-1}(1-x)^{b-1}$ for parameters $a,b>0$. In the latter case, we observe that
(11) $\Psi(x)=\frac{a+b}{b}\left(1-\frac{\beta(a,b+x)}{\beta(a,b)}\right)$,
where $\beta(.,.)$ denotes the beta function. This example originates from a random vector $(X,Y)$ with survival function $\exp(-\ell(x,y))$ that is defined by $(X,Y)=(L_{\epsilon_X}^{-1},L_{\epsilon_Y}^{-1})$, where $\epsilon_X,\epsilon_Y$ are iid standard exponential random variables and, independently, $L=\{L_t\}_{t\ge 0}$ is a so-called Holmgren-Liouville subordinator. The latter is a non-decreasing stochastic process with long-range dependence properties. In particular, $X$ and $Y$ are conditionally iid given $L$.
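The four steps can be combined into one sampler; a sketch under our own toy choices (a two-point $S$; the inversion in step 3 uses $F^{-1}(u)=\max\{0,1+\log(u)/Z\}$, with the atom of $F$ at zero handled by the maximum):

```python
import math
import random

def sample_Q_hl(sample_S, rng: random.Random) -> float:
    """Steps 1-4 above for the Holmgren-Liouville family."""
    # Step 1: Z by acceptance-rejection, proposing from f_S and
    # accepting with probability 1 + z/log(1-z) in (0, 1).
    while True:
        z = sample_S(rng)
        if rng.random() < 1.0 + z / math.log(1.0 - z):
            break
    # Step 2: Y_F with density proportional to x*exp(Z*x) on (0,1),
    # acceptance-rejection with comparison density g(x) = 2x.
    while True:
        yf = math.sqrt(rng.random())        # proposal with density 2x
        if rng.random() < math.exp(z * (yf - 1.0)):
            break
    # Step 3: X_{F,1}, X_{F,2} iid from F(x) = min{exp(Z*x - Z), 1}
    # by inversion; F has an atom at 0 of mass exp(-Z).
    x1 = max(0.0, 1.0 + math.log(1.0 - rng.random()) / z)
    x2 = max(0.0, 1.0 + math.log(1.0 - rng.random()) / z)
    # Step 4: combine via (10) with a fair Bernoulli coin.
    if rng.random() < 0.5:
        return yf / (yf + x2)
    return x1 / (x1 + yf)
```

Since the resulting $\ell$ is symmetric, $Q$ must have mean $1/2$, which the test below verifies.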
For the sake of completeness, we point out that Aℓ′{A}_{\ell }^{^{\prime} }is explicitly given by Aℓ′(x)=2Ψ(2)(1−2x)−(1−4max{x,1−x})Ψ1−min{x,1−x}max{x,1−x}+1−2max{x,1−x}max{x,1−x}Ψ′1−min{x,1−x}max{x,1−x}.\begin{array}{rcl}{A}_{\ell }^{^{\prime} }\left(x)& =& 2\Psi \left(2)\left(1-2x)-\left[(1-4\max \left\{x,1-x\right\})\Psi \left(1-\frac{\min \left\{x,1-x\right\}}{\max \left\{x,1-x\right\}}\right)\right.\\ & & \left.+\frac{1-2\max \left\{x,1-x\right\}}{\max \left\{x,1-x\right\}}{\Psi }^{^{\prime} }\left(1-\frac{\min \left\{x,1-x\right\}}{\max \left\{x,1-x\right\}}\right)\right].\end{array}In the particular special case with SShaving a beta distribution, the derivative Ψ′{\Psi }^{^{\prime} }of Ψ\Psi in (11) is given in terms of the Digamma function DΓ(x)=Γ′(x)/Γ(x){D}_{\Gamma }\left(x)={\Gamma }^{^{\prime} }\left(x)\hspace{0.1em}\text{/}\hspace{0.1em}\Gamma \left(x), which is pre-implemented in most software packages just like the Gamma function Γ(.)\Gamma \left(.)and Beta function β(.,.)\beta \left(.,.), to wit Ψ′(x)=a+bbβ(a,b+x)β(a,b)(DΓ(a+b+x)−DΓ(b+x)).{\Psi }^{^{\prime} }\left(x)=\frac{a+b}{b}\frac{\beta \left(a,b+x)}{\beta \left(a,b)}({D}_{\Gamma }\left(a+b+x)-{D}_{\Gamma }\left(b+x)).3.2Finite-valued QQ, resp. piece-wise constant Aℓ{A}_{\ell }The Pickands dependence function Aℓ{A}_{\ell }is piece-wise linear if and only if the associated random variable QQis discrete. We may approximate an arbitrary Aℓ{A}_{\ell }(from above due to convexity) by the function that takes the identical values on the grid i/ni\hspace{0.1em}\text{/}\hspace{0.1em}nfor n≥1n\ge 1and i=0,…,ni=0,\ldots ,nand which is piece-wise linear in between these grid points. The latter approximation corresponds to some finite-valued Qn{Q}_{n}. Since the corresponding approximation Aℓ,n{A}_{\ell ,n}converges point-wise to Aℓ{A}_{\ell }by construction, Qn{Q}_{n}converges in distribution to QQ, and also the associated copulas Cℓ,n{C}_{\ell ,n}approximate Cℓ{C}_{\ell }point-wise. 
From this perspective, the case of finite-valued $Q$ is of utmost importance.

Regarding the implementation, we recommend applying the strategy proposed in [18], that is, to work on the level of $A_\ell'$. To this end, we consider a Pickands dependence function $A_{\ell,n}$ that is piece-wise linear between given values $v_0:=0<v_1<\ldots<v_n:=1$, with $n\ge 1$ arbitrary but fixed. According to the methodology in [18], we parameterize this function in terms of the vectors ${\boldsymbol v}=(v_0,\ldots,v_n)$ and ${\boldsymbol a}=(a_1,\ldots,a_n)$ subject to the conditions
$-1\le a_1\le a_2\le\ldots\le a_n\le 1$, $\sum_{i=1}^{n}a_i(v_i-v_{i-1})=0$.
Intuitively, we define $A_{\ell,n}$ piece-wise linear with derivative $A_{\ell,n}'(x)=a_i$ for $v_{i-1}\le x<v_i$, $i=1,\ldots,n$. With $i(x):=\max\{i\in\{0,\ldots,n\}:v_i\le x\}$, $A_{\ell,n}$ is given by
$A_{\ell,n}(x)=1+(x-v_{i(x)})a_{i(x)+1}+\sum_{j=1}^{i(x)}a_j(v_j-v_{j-1})$.
With the convenient notation $a_0:=-1$ and $j(x):=\min\{i\in\{0,\ldots,n\}:v_i\ge x\}$, the associated random variable $Q_n$ has distribution function ${\mathbb{P}}(Q_n\le q)=(1-a_{j(1-q)})/2$.
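On the uniform grid $v_i=i/n$ this parameterization is easy to code: the slopes can be fitted from the values $A_\ell(i/n)$, and $Q_n$ can be sampled by inverting its distribution function. A sketch (function names ours), checked against the Cuadras-Augé Pickands function $A_\ell(x)=\theta\max\{x,1-x\}+1-\theta$ of Example 3.3, whose slopes are $\pm\theta$:

```python
import random

def fit_piecewise(A, n: int):
    """Fit the slopes a_1..a_n of a piece-wise linear Pickands function
    on the uniform grid i/n from the values A(i/n)."""
    a, s = [], 0.0
    for i in range(1, n + 1):
        ai = n * (A(i / n) - 1.0 - s / n)
        a.append(ai)
        s += ai
    return a

def sample_Qn(a, rng: random.Random) -> float:
    """Sample the discrete Q_n with atoms at 0, 1 and 1 - i/n, given the
    nondecreasing slope vector a on the uniform grid (with a_0 = -1)."""
    n = len(a)
    u = 2.0 * rng.random() - 1.0        # uniform on (-1, 1)
    if u >= a[-1]:                      # probability (1 - a_n)/2
        return 0.0
    if u < a[0]:                        # probability (1 + a_1)/2
        return 1.0
    for i in range(1, n):               # probability (a_{i+1} - a_i)/2
        if a[i - 1] <= u < a[i]:
            return 1.0 - i / n
    return 0.5                          # unreachable for valid slopes
```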
The probability mass function is given by P(Qn=0)=1−an2,P(Qn=1)=1+a12,P(Qn=1−vi)=ai+1−ai2,i=1,…,n−1.{\mathbb{P}}\left({Q}_{n}=0)=\frac{1-{a}_{n}}{2},\hspace{1.0em}{\mathbb{P}}\left({Q}_{n}=1)=\frac{1+{a}_{1}}{2},\hspace{1em}{\mathbb{P}}({Q}_{n}=1-{v}_{i})=\frac{{a}_{i+1}-{a}_{i}}{2},\hspace{1.0em}i=1,\ldots ,n-1.An arbitrary Aℓ{A}_{\ell }may now be approximated by a piece-wise constant one of the above form Aℓ,n{A}_{\ell ,n}with v=(0,1/n,…,(n−1)/n,1){\boldsymbol{v}}=\left(0,1\hspace{0.1em}\text{/}\hspace{0.1em}n,\ldots ,\left(n-1)\hspace{0.1em}\text{/}\hspace{0.1em}n,1)by computing the parameters a1,…,an{a}_{1},\ldots ,{a}_{n}iteratively from the relation Aℓin=1+1n∑j=1iaj,i=1,…,n,{A}_{\ell }\left(\frac{i}{n}\right)=1+\frac{1}{n}\mathop{\sum }\limits_{j=1}^{i}{a}_{j},\hspace{1.0em}i=1,\ldots ,n,meaning that a1=nAℓ1n−1,ai=nAℓin−1−1n∑j=1i−1aj,i≥2.{a}_{1}=n\left[{A}_{\ell }\left(\frac{1}{n}\right)-1\right],\hspace{1.0em}{a}_{i}=n\left[{A}_{\ell }\left(\frac{i}{n}\right)-1-\frac{1}{n}\mathop{\sum }\limits_{j=1}^{i-1}{a}_{j}\right],\hspace{1.0em}i\ge 2.Since exact simulation of the finite-valued Qn{Q}_{n}is possible, like the evaluation of Aℓ,n{A}_{\ell ,n}and Aℓ,n′{A}_{\ell ,n}^{^{\prime} }, our algorithm is feasible. Figure 1 visualizes points simulated from Cˆℓ,n{\hat{C}}_{\ell ,n}when ℓ\ell is from the Hüsler-Reiss family in Example 3.1.Figure 1Scatter plot of 2,500 samples from Cˆℓ{\hat{C}}_{\ell }with ℓ\ell given by the Hüsler-Reiss family with parameter θ=1/4\theta =1\hspace{0.1em}\text{/}\hspace{0.1em}4(gray points), as well as from its approximation Cˆℓ,n{\hat{C}}_{\ell ,n}(black points) with n=10n=10(left plot) and n=80n=80(right plot).4The specific case of symmetric normsFrom the proof of Theorem 2.2 we recall the decomposition of the survival copula Cˆℓ{\hat{C}}_{\ell }in (5) of the Archimax copula Cℓ{C}_{\ell }into the parts on the ℓ\ell -simplex and inside the ℓ\ell -simplex. 
To wit, the distribution function Cˆℓ{\hat{C}}_{\ell }of the random vector (Xℓ,Yℓ)\left({X}_{\ell },{Y}_{\ell })in Theorem 2.2 admits the decomposition Cˆℓ=(1−pCˆℓ)Cˆℓc+pCˆℓCˆℓs,{\hat{C}}_{\ell }=\left(1-{p}_{{\hat{C}}_{\ell }}){\hat{C}}_{\ell }^{c}+{p}_{{\hat{C}}_{\ell }}{\hat{C}}_{\ell }^{s},where pCˆℓ=P(ℓ(Xℓ,Yℓ)=1){p}_{{\hat{C}}_{\ell }}={\mathbb{P}}(\ell \left({X}_{\ell },{Y}_{\ell })=1)denotes the probability mass on the ℓ\ell -simplex, Cˆℓs{\hat{C}}_{\ell }^{s}denotes the cdf of a singular distribution concentrated on the ℓ\ell -simplex and Cˆℓc{\hat{C}}_{\ell }^{c}denotes the cdf of a probability distribution concentrated inside the ℓ\ell -simplex. Note that Cˆℓc{\hat{C}}_{\ell }^{c}is only absolutely continuous, if the associated random variable QQfrom the Pickands representation (4) of ℓ\ell is absolutely continuous, such as assumed in the proof of Theorem 2.2. In the general case, it has singular parts concentrated on straight lines corresponding to atoms of QQ, as can be seen in Figure 1.In the special case of symmetric ℓ\ell , that is if ℓ(x,y)=ℓ(y,x)\ell \left(x,y)=\ell (y,x)for arbitrary x,yx,y, there is one phenomenon that we find interesting enough to discuss in the present context. The distribution function Cˆℓs{\hat{C}}_{\ell }^{s}equals the distribution function of a counter-monotonic random vector, meaning that Cˆℓs(x,y)=(G1,ℓ(x)+G2,ℓ(y)−1)+,x,y∈[0,1],{\hat{C}}_{\ell }^{s}\left(x,y)={({G}_{1,\ell }\left(x)+{G}_{2,\ell }(y)-1)}_{+},\hspace{1.0em}x,y\in \left[0,1],with marginal distribution functions G1,ℓ{G}_{1,\ell }and G2,ℓ{G}_{2,\ell }, respectively. A stochastic representation for (Xs,ℓ,Ys,ℓ)∼Cˆℓs\left({X}_{s,\ell },{Y}_{s,\ell })\hspace{0.33em} \sim \hspace{0.33em}{\hat{C}}_{\ell }^{s}is hence well-known to be given by (G1,ℓ−1(W),G2,ℓ−1(1−W))({G}_{1,\ell }^{-1}\left(W),{G}_{2,\ell }^{-1}\left(1-W))for W∼U[0,1]W\hspace{0.33em} \sim \hspace{0.33em}{\mathcal{U}}\left[0,1]. 
Since we know that this random vector is concentrated on the ℓ\ell -unit simplex, the norm ℓ\ell determines the implicit relationship between G1,ℓ{G}_{1,\ell }and G2,ℓ{G}_{2,\ell }via the equation ℓ(G1,ℓ−1(w),G2,ℓ−1(1−w))=1for allw∈[0,1],\ell ({G}_{1,\ell }^{-1}\left(w),{G}_{2,\ell }^{-1}\left(1-w))=1\hspace{1em}\hspace{0.1em}\text{for all}\hspace{0.1em}\hspace{0.33em}w\in \left[0,1],so that G1,ℓ{G}_{1,\ell }determines G2,ℓ{G}_{2,\ell }and vice versa. Now if ℓ\ell is symmetric, it follows from exchangeability of Cˆℓs{\hat{C}}_{\ell }^{s}that G1,ℓ=G2,ℓ≕Gℓ{G}_{1,\ell }={G}_{2,\ell }\hspace{0.33em}=: \hspace{0.33em}{G}_{\ell }. The distribution function Gℓ{G}_{\ell }is then uniquely and implicitly determined by ℓ\ell via the equation (12)ℓ(Gℓ−1(w),Gℓ−1(1−w))=1for allw∈[0,1].\ell ({G}_{\ell }^{-1}\left(w),{G}_{\ell }^{-1}\left(1-w))=1\hspace{0.33em}\hspace{0.1em}\text{for all}\hspace{0.1em}\hspace{0.33em}w\in \left[0,1].We furthermore see that ℓ(x,y)=1⇔∃w∈[0,1]:(x,y)=(Gℓ−1(w),Gℓ−1(1−w))⇔Gℓ(x)+Gℓ(y)=1,\ell \left(x,y)=1\iff \exists w\in \left[0,1]:\left(x,y)=({G}_{\ell }^{-1}\left(w),{G}_{\ell }^{-1}\left(1-w))\iff {G}_{\ell }\left(x)+{G}_{\ell }(y)=1,which provides an alternative way to describe the unit simplex of ℓ\ell in terms of the distribution function Gℓ{G}_{\ell }. Summarizing, we obtain a correspondence between symmetric ℓ\ell and convex and continuous distribution functions Gℓ{G}_{\ell }on [0,1]\left[0,1]. (An arbitrary distribution function on [0,1]\left[0,1]that is convex is always continuous on [0,1){[}0,1), but in general may have a jump at 1, which we exclude by saying that Gℓ{G}_{\ell }is continuous on all of [0,1]\left[0,1].)Lemma 4.1(Correspondence between symmetric ℓ\ell and Gℓ{G}_{\ell }) For a symmetric stable tail dependence function ℓ\ell the distribution function Gℓ{G}_{\ell }is continuous and convex on [0,1]\left[0,1]. 
Conversely, any continuous and convex distribution function on [0,1]\left[0,1]implicitly defines a symmetric stable tail dependence function ℓ=ℓG\ell ={\ell }_{G}, which is uniquely determined by its unit simplex {(x,y)∈[0,1]2:G(x)+G(y)=1}\left\{\left(x,y)\in {\left[0,1]}^{2}:G\left(x)+G(y)=1\right\}.ProofContinuity of Gℓ{G}_{\ell }is immediate, since a jump would imply a non-empty interval (a,b)⊂(0,1)\left(a,b)\subset \left(0,1)on which Gℓ−1{G}_{\ell }^{-1}was constant, contradicting the fact that w↦(Gℓ−1(w),Gℓ−1(1−w))w\mapsto ({G}_{\ell }^{-1}\left(w),{G}_{\ell }^{-1}\left(1-w))is the closed curve that determines the ℓ\ell -unit simplex. To verify convexity, assume the converse, that is assume Gℓ{G}_{\ell }is not convex. Then we necessarily find two distinct x,y∈[0,1]x,y\in \left[0,1]and α∈(0,1)\alpha \in \left(0,1)such that Gℓ(αx+(1−α)y)>αGℓ(x)+(1−α)Gℓ(y).{G}_{\ell }\left(\alpha x+\left(1-\alpha )y)\gt \alpha {G}_{\ell }\left(x)+\left(1-\alpha ){G}_{\ell }(y).By continuity of Gℓ{G}_{\ell }, we can always choose xxand yysuch that the last inequality holds for all α∈(0,1)\alpha \in \left(0,1)jointly. 
In particular, for fixed α∈(0,1)\alpha \in \left(0,1)we also have this inequality for 1−α1-\alpha , that is Gℓ(αy+(1−α)x)>αGℓ(y)+(1−α)Gℓ(x).{G}_{\ell }\left(\alpha y+\left(1-\alpha )x)\gt \alpha {G}_{\ell }(y)+\left(1-\alpha ){G}_{\ell }\left(x).But this implies Gℓ(x)+Gℓ(y)=αGℓ(x)+(1−α)Gℓ(y)+αGℓ(y)+(1−α)Gℓ(x)<Gℓ(αx+(1−α)y)+Gℓ(αy+(1−α)x).{G}_{\ell }\left(x)+{G}_{\ell }(y)=\alpha {G}_{\ell }\left(x)+\left(1-\alpha ){G}_{\ell }(y)+\alpha {G}_{\ell }(y)+\left(1-\alpha ){G}_{\ell }\left(x)\lt {G}_{\ell }\left(\alpha x+\left(1-\alpha )y)+{G}_{\ell }\left(\alpha y+\left(1-\alpha )x).This implies that ℓ(x,y)<ℓ(αx+(1−α)y,αy+(1−α)x),\ell \left(x,y)\lt \ell (\alpha x+\left(1-\alpha )y,\alpha y+\left(1-\alpha )x),or equivalently in terms of the convex function Aℓ{A}_{\ell }that Aℓxx+y<Aℓαx+(1−α)yx+y≤αAℓxx+y+(1−α)Aℓyx+y,{A}_{\ell }\left(\frac{x}{x+y}\right)\lt {A}_{\ell }\left(\frac{\alpha x+\left(1-\alpha )y}{x+y}\right)\le \alpha {A}_{\ell }\left(\frac{x}{x+y}\right)+\left(1-\alpha ){A}_{\ell }\left(\frac{y}{x+y}\right),for all α∈(0,1)\alpha \in \left(0,1). Reversing the roles of xxand yyin this derivation, we hence obtain that maxAℓxx+y,Aℓyx+y<αAℓxx+y+(1−α)Aℓyx+y,\max \left\{{A}_{\ell }\left(\frac{x}{x+y}\right),{A}_{\ell }\left(\frac{y}{x+y}\right)\right\}\lt \alpha {A}_{\ell }\left(\frac{x}{x+y}\right)+\left(1-\alpha ){A}_{\ell }\left(\frac{y}{x+y}\right),for all α∈(0,1)\alpha \in \left(0,1), contradicting convexity of Aℓ{A}_{\ell }.Conversely, let GGbe a convex and continuous distribution function on [0,1]\left[0,1]. 
Since norms in R2{{\mathbb{R}}}^{2}are uniquely determined by their unit balls, which are in one-to-one correspondence with closed convex sets containing the origin in their interior, all we need to prove is that the closed set BG≔{(x,y)∈R2:G(x)+G(y)≤1}{B}_{G}:= \{\left(x,y)\in {{\mathbb{R}}}^{2}:G\left(x)+G(y)\le 1\}is convex, which follows immediately from the convexity of GG, as can readily be verified by the readers themselves.□As a consequence, we can improve our simulation algorithm based on the stochastic representation (7) for symmetric ℓ\ell under the following (strong) assumption that is sometimes satisfied:Assumption 5(Strong additional assumption for symmetric ℓ\ell ) We can evaluate exactly the unique implicit function Gℓ−1{G}_{\ell }^{-1}that satisfies (12).Having the possibility to evaluate Gℓ−1{G}_{\ell }^{-1}explicitly, the stochastic representation (7) in the exchangeable case of symmetric ℓ\ell can be replaced with a simpler expression.Lemma 4.2(Simpler sampling algorithm for symmetric ℓ\ell ) We consider the same setting as in Theorem 2.2 with the additional assumption that ℓ(x,y)=ℓ(y,x)\ell \left(x,y)=\ell (y,x)for all x,y≥0x,y\ge 0, and with Assumption 5 satisfied. We further assume that W∼U[0,1]W\hspace{0.33em} \sim \hspace{0.33em}{\mathcal{U}}\left[0,1]is independent of V,QV,Q, which are the same as in Theorem 2.2. 
Then a random vector (Xℓ,Yℓ)∼Cˆℓ\left({X}_{\ell },{Y}_{\ell })\hspace{0.33em} \sim \hspace{0.33em}{\hat{C}}_{\ell }admits the stochastic representation(Xℓ,Yℓ)∼(Gℓ−1(W),Gℓ−1(1−W)),ifℓV2Q,V2(1−Q)≥1V2Q,V2(1−Q),ifℓV2Q,V2(1−Q)<1.\left({X}_{\ell },{Y}_{\ell })\hspace{0.33em} \sim \hspace{0.33em}\left\{\begin{array}{ll}({G}_{\ell }^{-1}\left(W),{G}_{\ell }^{-1}\left(1-W)),& {if}\hspace{0.33em}\ell \left(\frac{V}{2Q},\frac{V}{2\left(1-Q)}\right)\ge 1\\ \left(\frac{V}{2Q},\frac{V}{2\left(1-Q)}\right),& {if}\hspace{0.33em}\ell \left(\frac{V}{2Q},\frac{V}{2\left(1-Q)}\right)\lt 1.\\ \end{array}\right.ProofThe statement follows from the text preceding this lemma.□We provide two examples for situations in which the implicit definition of Gℓ{G}_{\ell }is simple to make explicit, hence Assumption 5 is met.Example 4.3(The case ℓ=‖.‖p\ell ={\Vert .\Vert }_{p}) Obviously, Gℓ−1(w)=w1/p{G}_{\ell }^{-1}\left(w)={w}^{1\text{/}p}does the job. Note that this is well in line with known results about ‖.‖p{\Vert .\Vert }_{p}-symmetric survival functions in [15].Example 4.4(The case of QQwith just two atoms) If we consider for q∈[0,1/2]q\in \left[0,1\hspace{0.1em}\text{/}\hspace{0.1em}2]the norm ℓ(x,y)=max{qx,(1−q)y}+max{(1−q)x,qy},\ell \left(x,y)=\max \left\{qx,\left(1-q)y\right\}+\max \left\{\left(1-q)x,qy\right\},it is not difficult to verify that Gℓ−1(w)=(q+(1−2q)w)/(1−q){G}_{\ell }^{-1}\left(w)=\left(q+\left(1-2q)w)\hspace{0.1em}\text{/}\hspace{0.1em}\left(1-q)does the job. Note that these stable tail dependence functions form the extremal boundary of the simplex of all symmetric bivariate stable tail dependence functions, see [14]. 
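Both claims that the stated $G_\ell^{-1}$ "does the job" can be verified mechanically: plugging it into (12) must return 1 for every $w\in[0,1]$. A quick deterministic check (function names ours):

```python
def ell_p(p, x, y):
    """Gumbel norm of Example 4.3."""
    return (x ** p + y ** p) ** (1.0 / p)

def ell_two_atoms(q, x, y):
    """The norm of Example 4.4 with parameter q in [0, 1/2]."""
    return max(q * x, (1.0 - q) * y) + max((1.0 - q) * x, q * y)

def Ginv_p(p, w):
    """Candidate G_l^{-1} for ell_p."""
    return w ** (1.0 / p)

def Ginv_two_atoms(q, w):
    """Candidate G_l^{-1} for Example 4.4."""
    return (q + (1.0 - 2.0 * q) * w) / (1.0 - q)
```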
The probability mass of $\hat{C}_\ell$ in this case is fully concentrated on the boundary of the triangle with vertices $(0,0)$, $(q/(1-q),1)$ and $(1,q/(1-q))$.

As a final remark, we point out that our simulation algorithm can be utilized to simulate a random variable $S\sim G_\ell$ under Assumptions 1–3 on the norm $\ell$ only, that is, even without having $G_\ell$ or even $G_\ell^{-1}$ in explicit form. To this end, we perform the following steps, again with $Q$ denoting the random variable associated with $\ell$ according to the Pickands representation (4):

1. Repeatedly simulate independent instances of $V$ and $Q$, until $\ell\left(\frac{V}{2Q},\frac{V}{2(1-Q)}\right)>1$.
2. With $Q$ from the previous step, run Algorithm 1 in order to simulate $X_{h_Q}$.
3. Return $S=X_{h_Q}/A_\ell(X_{h_Q})\sim G_\ell$.

In probabilistic terms, $S\sim G_\ell$ has the same distribution as $X_{h_{\tilde{Q}}}/A_\ell(X_{h_{\tilde{Q}}})$, where $X_{h_{\tilde{Q}}}\sim h_{\tilde{Q}}$ and $\tilde{Q}$ has distribution given by ${\mathbb{P}}(\tilde{Q}\in{\rm d}q)=\frac{c_q}{p_{\hat{C}_\ell}}{\mathbb{P}}(Q\in{\rm d}q)$.

5 Bivariate reciprocal Archimax copulas

The presented stochastic model for Archimax copulas can be leveraged to obtain an exact simulation algorithm for bivariate random vectors that are max-infinitely divisible and whose exponent measure has $\ell$-symmetric survival function.
Their associated copulas take the form (2), so that it is plausible to call them reciprocal Archimax copulas, generalizing the terminology of [7] from the special case ℓ=‖.‖1\ell ={\Vert .\Vert }_{1}to general ℓ\ell . We briefly explain this, the logic being identical to the logic outlined in [15, Section 5] for the special case ℓ=‖.‖p\ell ={\Vert .\Vert }_{p}. Concretely, a measure μ\mu on E≔[0,∞]2⧹{(0,0)}E:= {\left[0,\infty ]}^{2}\setminus \left\{\left(0,0)\right\}with μ(E⧹[0,x]×[0,y])<∞∀x,y>0,limx,y→∞μ(E⧹[0,x]×[0,y])=0,\mu \left(E\setminus \left[0,x]\times \left[0,y])\lt \infty \hspace{1.0em}\forall x,y\gt 0,\hspace{1.0em}\mathop{\mathrm{lim}}\limits_{x,y\to \infty }\mu \left(E\setminus \left[0,x]\times \left[0,y])=0,is called an exponent measure. We say that μ\mu has ℓ\ell -symmetric survival function, if the survival function of μ\mu depends on its argument (x,y)\left(x,y)only through its norm ℓ(x,y)\ell \left(x,y), i.e., μ((x,∞]×(y,∞])=ψ(ℓ(x,y))\mu \left((x,\infty ]\times (y,\infty ])=\psi \left(\ell \left(x,y))for some function ψ\psi in one variable. 
The feasible functions are well known to be exactly those ψ:(0,∞)→[0,∞)\psi :\left(0,\infty )\to {[}0,\infty )that are convex with limx↘0ψ(x)=∞{\mathrm{lim}}_{x\searrow 0}\psi \left(x)=\infty and limx→∞ψ(x)=0{\mathrm{lim}}_{x\to \infty }\psi \left(x)=0, and this is the case if and only if there exists a non-finite Radon measure νψ{\nu }_{\psi }on (0,∞](0,\infty ]with νψ({∞})=0{\nu }_{\psi }\left(\left\{\infty \right\})=0and ψ(x)=∫x∞1−xrνψ(dr).\psi \left(x)=\underset{x}{\overset{\infty }{\int }}\left(1-\frac{x}{r}\right){\nu }_{\psi }\left({\rm{d}}r). Given a pair (ℓ,ψ)\left(\ell ,\psi )of a stable tail dependence function and such a convex ψ\psi , defining an exponent measure μ\mu with ℓ\ell -norm symmetric survival function, the random vector (X∗,Y∗)\left({X}_{\ast },{Y}_{\ast })on (0,∞)2{\left(0,\infty )}^{2}with distribution function P(X∗≤x,Y∗≤y)=e−μ(E⧹[0,x]×[0,y])=e−ψ(x)−ψ(y)+ψ(ℓ(x,y)),x,y≥0,{\mathbb{P}}\left({X}_{\ast }\le x,{Y}_{\ast }\le y)={e}^{-\mu \left(E\setminus \left[0,x]\times \left[0,y])}={e}^{-\psi \left(x)-\psi (y)+\psi (\ell \left(x,y))},\hspace{1.0em}x,y\ge 0,is said to be max-infinitely divisible with exponent measure μ\mu .
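To make the representing measure concrete: for the generator ψa used later in (13), our reading of [15, Example 3] is that νψ places mass a on each atom 1/k for k = 1, 2, …, and the integral representation above then reproduces (13). This can be checked numerically; the following snippet and its helper names are our own illustration, not taken from the paper.

```python
import math

a = 1.5

def psi_a(x):
    """Closed form (13): psi_a(x) = a * floor(1/x) * (1 - x*(floor(1/x)+1)/2)."""
    k = math.floor(1.0 / x)
    return a * k * (1.0 - x * (k + 1) / 2.0)

def psi_from_measure(x, kmax=10_000):
    """psi(x) = integral of (1 - x/r)_+ against nu_psi, here nu_psi = a * sum_k delta_{1/k}.
    For x >= 1/kmax all atoms beyond kmax contribute zero, so the sum is exact."""
    return a * sum(max(0.0, 1.0 - x * k) for k in range(1, kmax + 1))
```

For every x bounded away from zero the truncated sum agrees with (13) to machine precision, which supports the identification of the atoms.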
Note that X∗{X}_{\ast }and Y∗{Y}_{\ast }both have the same marginal distribution function x↦exp{−ψ(x)}x\mapsto \exp \left\{-\psi \left(x)\right\}, so their associated copula is computed to be C∗,ψ,ℓ(x,y)=xyexp{ψ[ℓ(ψ−1{−log(x)},ψ−1{−log(y)})]}=xyCe−ψ,ℓ(x,y).{C}_{\ast ,\psi ,\ell }\left(x,y)=xy\exp \{\psi {[}\ell ({\psi }^{-1}\left\{-\log \left(x)\right\},{\psi }^{-1}\left\{-\log (y)\right\})]\}=\frac{x\hspace{0.33em}y}{{C}_{{e}^{-\psi },\ell }\left(x,y)}. An exact simulation algorithm for C∗,ψ,ℓ{C}_{\ast ,\psi ,\ell }requires exact simulation of (Xℓ,Yℓ)\left({X}_{\ell },{Y}_{\ell }), which is feasible under Assumptions 1–3, as well as the following assumption on ψ\psi , respectively, its representing measure νψ{\nu }_{\psi }: Assumption 6 (Additional requirement for reciprocal Archimax copulas) The function ψ\psi and the inverse Gνψ−1{G}_{{\nu }_{\psi }}^{-1}of the survival function Gνψ(x)≔νψ((x,∞]){G}_{{\nu }_{\psi }}\left(x):= {\nu }_{\psi }\left((x,\infty ])can be evaluated exactly. Algorithm 2 provides the pseudo-code for the simulation of a sample (U,V)∼C∗,ψ,ℓ\left(U,V)\hspace{0.33em} \sim \hspace{0.33em}{C}_{\ast ,\psi ,\ell }under Assumptions 1, 2, 3, and 6, the proof idea being an immediate generalization of [15, Section 5] from ‖.‖p{\Vert .\Vert }_{p}to general ℓ\ell , which readers can verify themselves. Note that the while-loop is guaranteed to terminate, since η=Gνψ−1(T)\eta ={G}_{{\nu }_{\psi }}^{-1}\left(T)decreases to zero due to the non-finiteness of νψ{\nu }_{\psi }. In Algorithm 2, the sub-routine SimulateExp returns an exponential random variable with unit mean. Figure 2 depicts scatter plots for two examples of copulas of the form C∗,ψ,ℓ{C}_{\ast ,\psi ,\ell }. In both plots, the function ψ=ψa\psi ={\psi }_{a}with a parameter a>0a\gt 0is chosen to be (13)ψa(x)=a⌊1/x⌋(1−x(⌊1/x⌋+1)/2),x>0,{\psi }_{a}\left(x)=a\lfloor \frac{1}{x}\rfloor \left(1-\frac{x\left(\lfloor \frac{1}{x}\rfloor +1\right)}{2}\right),\hspace{1.0em}x\gt 0,Algorithm 2.
Simulation of (U,V) ∼ C∗,ψ,ℓ

1: procedure SimulateC∗(ℓ, ψ)
2:   (X∗, Y∗) ← (0, 0)
3:   T ← SimulateExp
4:   η ← Gνψ−1(T)
5:   while η > min{X∗, Y∗} do
6:     simulate (Xℓ, Yℓ) ∼ Cˆℓ according to (7)
7:     X∗ ← max{X∗, ηXℓ}
8:     Y∗ ← max{Y∗, ηYℓ}
9:     T ← T + SimulateExp
10:    η ← Gνψ−1(T)
11:  end while
12:  return (U, V) = (e−ψ(X∗), e−ψ(Y∗))
13: end procedure

The generator in (13) corresponds to a discrete measure νψ{\nu }_{\psi } whose required inverse of the survival function equals Gνψ−1(x)=1/⌈x/a⌉{G}_{{\nu }_{\psi }}^{-1}\left(x)=1\hspace{-0.08em}\text{/}\hspace{-0.08em}\lceil x\hspace{-0.08em}\text{/}\hspace{-0.08em}a\rceil , see [15, Example 3].

Figure 2: Scatter plot of 25,000 samples from C∗,ψ,ℓ{C}_{\ast ,\psi ,\ell } with ψa{\psi }_{a} given in (13) for a=1.5a=1.5 and two different ℓ\ell (left: ℓ\ell from Example 4.4 with q=0.45q=0.45; right: ℓ\ell piece-wise constant corresponding to some (asymmetric) QQ with six atoms and P(Q=0),P(Q=1)>0{\mathbb{P}}\left(Q=0),{\mathbb{P}}\left(Q=1)\gt 0, i.e., full support).

Finally, we collect some immediate properties of C∗,ψ,ℓ{C}_{\ast ,\psi ,\ell }.
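Before turning to these properties, a concrete illustration of Algorithm 2: in the special case ℓ=‖.‖1, the survival copula Cℓ is the countermonotone copula, so a sample from Cˆℓ is simply (U, 1−U) with U uniform, and with ψ=ψa from (13) the whole algorithm collapses to a few lines. The following Python sketch is our own illustration (not the authors' code) of this reciprocal Archimedean special case.

```python
import math
import random

def psi_a(x, a):
    """Generator (13): psi_a(x) = a*floor(1/x)*(1 - x*(floor(1/x)+1)/2)."""
    k = math.floor(1.0 / x)
    return a * k * (1.0 - x * (k + 1) / 2.0)

def simulate_reciprocal_archimedean(a=1.5, rng=random):
    """Algorithm 2 specialised to l = 1-norm, where (X_l, Y_l) ~ (U, 1-U)."""
    X, Y = 0.0, 0.0
    T = rng.expovariate(1.0)
    eta = 1.0 / math.ceil(T / a)          # G_{nu_psi}^{-1}(T) = 1/ceil(T/a)
    while eta > min(X, Y):
        U = rng.random()
        Xl, Yl = U, 1.0 - U               # sample from C^_l for l = ||.||_1
        X, Y = max(X, eta * Xl), max(Y, eta * Yl)
        T += rng.expovariate(1.0)
        eta = 1.0 / math.ceil(T / a)
    return math.exp(-psi_a(X, a)), math.exp(-psi_a(Y, a))
```

Since the marginal distribution function of X∗ is exp{−ψ(x)}, both returned coordinates should be approximately uniform on (0,1], which offers a cheap self-test of the implementation.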
Whereas traditional concordance measures like Spearman’s Rho or Kendall’s Tau appear to be difficult to compute in convenient form for C∗,ψ,ℓ{C}_{\ast ,\psi ,\ell }, the following properties are straightforward to observe: Support of C∗,ψ,ℓ{C}_{\ast ,\psi ,\ell }: The left and right end points of the support of QQhave an influence on the support of C∗,ψ,ℓ{C}_{\ast ,\psi ,\ell }. Assume that QQhas support inside the interval [bQ,uQ]\left[{b}_{Q},{u}_{Q}]with bQ>0{b}_{Q}\gt 0and/or uQ<1{u}_{Q}\lt 1. It follows that Aℓ(x)=1−x{A}_{\ell }\left(x)=1-xon [0,bQ]\left[0,{b}_{Q}]and/or Aℓ(x)=x{A}_{\ell }\left(x)=xon [uQ,1]\left[{u}_{Q},1]. A simple computation shows that this implies C∗,ψ,ℓ(x,y)=x,ifx≤exp−ψbQ1−bQψ−1[−log(y)]y,ify≤exp−ψ1−uQuQψ−1[−log(x)],{C}_{\ast ,\psi ,\ell }\left(x,y)=\left\{\begin{array}{ll}x,& \hspace{0.1em}\text{if}\hspace{0.1em}\hspace{0.33em}x\le \exp \left\{-\psi \left(\frac{{b}_{Q}}{1-{b}_{Q}}{\psi }^{-1}\left[-\log (y)]\right)\right\}\\ y,& \hspace{0.1em}\text{if}\hspace{0.1em}\hspace{0.33em}y\le \exp \left\{-\psi \left(\frac{1-{u}_{Q}}{{u}_{Q}}{\psi }^{-1}\left[-\log \left(x)]\right)\right\}\end{array}\right.,so that the copula assigns no mass to these two areas, since the second derivative vanishes. Consequently, if (x,y)\left(x,y)is in the support of C∗,ψ,ℓ{C}_{\ast ,\psi ,\ell }, then necessarily exp−ψ1−uQuQψ−1[−log(x)]≤y≤exp−ψ1−bQbQψ−1[−log(x)].\exp \left\{-\psi \left(\frac{1-{u}_{Q}}{{u}_{Q}}{\psi }^{-1}\left[-\log \left(x)]\right)\right\}\le y\le \exp \left\{-\psi \left(\frac{1-{b}_{Q}}{{b}_{Q}}{\psi }^{-1}\left[-\log \left(x)]\right)\right\}.Note that the lower (upper) bound becomes trivially equal to 0 (equal to 1) if uQ=1{u}_{Q}=1(bQ=0{b}_{Q}=0).
The left plot in Figure 2 depicts an example with support strictly inside the unit square [0,1]2{\left[0,1]}^{2}, since bQ=0.45=1−uQ{b}_{Q}=0.45=1-{u}_{Q}in this case. Tail dependence: Just as in the case of bivariate Archimax copulas, the coefficients of upper- and lower-tail dependence depend on regular variation properties of the generator ψ\psi as well as on the value ℓ(1,1)∈[1,2]\ell \left(1,1)\in \left[1,2], and both lower- and upper-tail dependence can be present or not. To this end, we recall that a real function f:R→Rf:{\mathbb{R}}\to {\mathbb{R}}is called regularly varying at some point ∗∈[−∞,∞]\ast \in \left[-\infty ,\infty ]with index −ρ-\rho for ρ∈(0,∞)\rho \in \left(0,\infty )if limt→∗f(xt)/f(t)=x−ρ{\mathrm{lim}}_{t\to \ast }f\left(xt)\hspace{0.1em}\text{/}\hspace{0.1em}f\left(t)={x}^{-\rho }for each x>0x\gt 0, this property being denoted f∈ℛ−ρ∗f\in {{\mathcal{ {\mathcal R} }}}_{-\rho }^{\ast }. With this terminology, we immediately obtain eψ∈ℛ−ρ0forρ∈(0,∞)⇒limx↘0C∗,ψ,ℓ(x,x)x=ℓ(1,1)−ρ.{e}^{\psi }\in {{\mathcal{ {\mathcal R} }}}_{-\rho }^{0}\hspace{1em}\hspace{0.1em}\text{for}\hspace{0.1em}\hspace{0.33em}\rho \in \left(0,\infty )\hspace{0.33em}\Rightarrow \hspace{0.33em}\mathop{\mathrm{lim}}\limits_{x\searrow 0}\frac{{C}_{\ast ,\psi ,\ell }\left(x,x)}{x}=\ell {\left(1,1)}^{-\rho }. The limit is called lower-tail dependence coefficient and measures the degree of accumulation of mass in the lower-left corner of [0,1]2{\left[0,1]}^{2}.
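This implication can be sanity-checked numerically. Taking ℓ=‖.‖1 (so ℓ(1,1)=2) and the generator ψ(x)=−log(1−e−x) from the example discussed next (θ=1, hence ρ=1), the predicted lower-tail coefficient is 2−1=0.5. The following small Python check is our own illustration; the helper names are ours.

```python
import math

def psi(x):
    """Generator psi(x) = -log(1 - exp(-x)); note that psi is its own inverse."""
    return -math.log(-math.expm1(-x))

def C_star(x, y):
    """C_{*,psi,l}(x,y) = x*y*exp(psi(l(psi^{-1}(-log x), psi^{-1}(-log y)))) for l = 1-norm."""
    s = psi(-math.log(x)) + psi(-math.log(y))   # psi^{-1} = psi here
    return x * y * math.exp(psi(s))

# ratio C(x,x)/x for small x should approach l(1,1)**(-rho) = 0.5
ratio = C_star(1e-8, 1e-8) / 1e-8
```

The value of `ratio` is extremely close to 0.5, in line with the displayed limit; in this particular case one can even verify by hand that C∗(x,x)=x/(2−x).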
A concrete example is given by the generator ψ(x)=−log(1−exp{−xθ})\psi \left(x)=-\log (1-\exp \left\{-{x}^{\theta }\right\})for some θ≥1\theta \ge 1, for which we observe exp(ψ)∈ℛ−θ0\exp \left(\psi )\in {{\mathcal{ {\mathcal R} }}}_{-\theta }^{0}. Similarly, if ψ′{\psi }^{^{\prime} }exists on an interval (T,∞)\left(T,\infty )for some T≥0T\ge 0, we obtain ψ′∈ℛ−ρ∞forρ∈(1,∞)⇒limx↗1C∗,ψ,ℓ(x,x)−2x+11−x=ℓ(1,1)1−ρ.{\psi }^{^{\prime} }\in {{\mathcal{ {\mathcal R} }}}_{-\rho }^{\infty }\hspace{1em}\hspace{0.1em}\text{for}\hspace{0.1em}\hspace{0.33em}\rho \in \left(1,\infty )\hspace{0.33em}\Rightarrow \hspace{0.33em}\mathop{\mathrm{lim}}\limits_{x\nearrow 1}\frac{{C}_{\ast ,\psi ,\ell }\left(x,x)-2x+1}{1-x}=\ell {\left(1,1)}^{1-\rho }. The limit is called upper-tail dependence coefficient and measures the degree of accumulation of mass in the upper-right corner of [0,1]2{\left[0,1]}^{2}. A concrete example is given by the generator ψ(x)=x−θ\psi \left(x)={x}^{-\theta }with θ>0\theta \gt 0, for which we observe ψ′∈ℛ−(1+θ)∞{\psi }^{^{\prime} }\in {{\mathcal{ {\mathcal R} }}}_{-\left(1+\theta )}^{\infty }.

6 Conclusion and outlook

We have derived a novel simulation algorithm for bivariate Archimax copulas, demonstrating its feasibility through many examples. Furthermore, the algorithm was used to obtain an exact simulation algorithm also for the family of bivariate reciprocal Archimax copulas. A generalization of the algorithm to the multi-dimensional case d≥3d\ge 3remains an interesting open problem, which to date is only solved for the particular case ℓ=‖.‖p\ell ={\Vert .\Vert }_{p}, see [15].


Dependence Modeling , Volume 10 (1): 19 – Jan 1, 2022

Publisher
de Gruyter
Copyright
© 2022 Jan-Frederik Mai, published by De Gruyter
ISSN
2300-2298
eISSN
2300-2298
DOI
10.1515/demo-2022-0102

1IntroductionWe recall that a bivariate copula is the restriction to [0,1]2{\left[0,1]}^{2}of a joint distribution function of a random vector (X,Y)\left(X,Y)with the property that both XXand YYare uniformly distributed on the interval [0,1]\left[0,1], i.e., FX(x)≔P(X≤x)=x{F}_{X}\left(x):= {\mathbb{P}}\left(X\le x)=xand FY(y)≔P(Y≤y)=y{F}_{Y}(y):= {\mathbb{P}}\left(Y\le y)=yfor 0≤x,y≤10\le x,y\le 1. There exists a lot of research on copulas, since the distribution function of an arbitrary bivariate random vector (X,Y)\left(X,Y)can always be written as C(FX(x),FY(y))C\left({F}_{X}\left(x),{F}_{Y}(y))for some copula CCdue to Sklar’s theorem, so that copulas are an analytic tool to separate dependence and marginals. Similarly, the survival function of (X,Y)\left(X,Y)can always be written as Cˆ(1−FX(x),1−FY(y))\hat{C}\left(1-{F}_{X}\left(x),1-{F}_{Y}(y))for some copula Cˆ\hat{C}, called survival copula. If FX,FY{F}_{X},{F}_{Y}are continuous, the copula and survival copula of (X,Y)\left(X,Y)are even unique, see [5,17] for popular textbooks on the topic.The present article deals with the specific family of bivariate Archimax copulas that have been introduced in [1] with the intention to create a large parametric family of copulas with desirable properties in the context of extreme-value theory. For detailed background on dependence properties we also refer to [6], and inference for Archimax copulas is investigated in [3]. Bivariate Archimax copulas are parameterized in terms of a pair (φ,ℓ)\left(\varphi ,\ell )of a continuous and convex function φ:[0,∞)→[0,1]\varphi :{[}0,\infty )\to \left[0,1]with φ(0)=1\varphi \left(0)=1and limx→∞φ(x)=0{\mathrm{lim}}_{x\to \infty }\varphi \left(x)=0, and a norm ℓ\ell on R2{{\mathbb{R}}}^{2}with normalization ℓ(1,0)=ℓ(0,1)=1\ell \left(1,0)=\ell \left(0,1)=1that is orthant–monotonic, which means that ℓ(x,y)=ℓ(∣x∣,∣y∣)\ell \left(x,y)=\ell \left(| x| ,| y| ). 
Intuitively, this means that its unit circle is axis-symmetric so that its unit simplex {(x,y)∈[0,1]2:ℓ(x,y)=1}\left\{\left(x,y)\in {\left[0,1]}^{2}:\ell \left(x,y)=1\right\}lies between the ones corresponding to ‖.‖1{\Vert .\Vert }_{1}and ‖.‖∞{\Vert .\Vert }_{\infty }. It is well-known from [1,2] that there exists a random vector (X,Y)\left(X,Y)on the positive orthant (0,∞)2{\left(0,\infty )}^{2}whose survival function is given by P(X>x,Y>y)=φ∘ℓ(x,y){\mathbb{P}}\left(X\gt x,Y\gt y)=\varphi \circ \ell \left(x,y), x,y≥0x,y\ge 0. The unique survival copula of (X,Y)\left(X,Y)is called Archimax copula, given by (1)Cφ,ℓ(x,y)=φ∘ℓ[φ−1(x),φ−1(y)],0≤x,y≤1.{C}_{\varphi ,\ell }\left(x,y)=\varphi \circ \ell {[}{\varphi }^{-1}\left(x),{\varphi }^{-1}(y)],\hspace{1.0em}0\le x,y\le 1.In Section 5, we will further provide an exact simulation algorithm for reciprocal Archimax copulas, which have the algebraic form (2)C∗,ψ,ℓ(x,y)=xyCe−ψ,ℓ(x,y),0≤x,y≤1,{C}_{\ast ,\psi ,\ell }\left(x,y)=\frac{x\hspace{0.33em}y}{{C}_{{e}^{-\psi },\ell }\left(x,y)},\hspace{1.0em}0\le x,y\le 1,where ℓ\ell is the same as above, and ψ:(0,∞)→[0,∞)\psi :\left(0,\infty )\to {[}0,\infty )is a convex function with limx→∞ψ(x)=0{\mathrm{lim}}_{x\to \infty }\psi \left(x)=0(just like φ\varphi ) and limx↘0ψ(x)=∞{\mathrm{lim}}_{x\searrow 0}\psi \left(x)=\infty (unlike φ\varphi ). The nomenclature originates from the algebraic resemblance to Archimax copulas in (1) and was introduced in [7] for the special case ℓ=‖.‖1\ell ={\Vert .\Vert }_{1}. These arise as copulas of bivariate max-infinitely divisible random vectors whose exponent measure has ℓ\ell -symmetric survival function. The function φ\varphi is called an Archimedean generator, since the subfamily of copulas of the form Cφ,ℓ=‖.‖1{C}_{\varphi ,\ell ={\Vert .\Vert }_{1}}coincides with Archimedean copulas, see [16] for background on the matter.
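As a quick consistency check of formula (1): for φ(x)=exp(−x) and ℓ a p-norm, the Archimax construction reproduces the well-known Gumbel–Hougaard extreme-value copula. The small Python illustration below is our own; the helper names are assumptions, not from the paper.

```python
import math

def archimax(x, y, phi, phi_inv, ell):
    """Formula (1): C_{phi,l}(x, y) = phi(l(phi^{-1}(x), phi^{-1}(y)))."""
    return phi(ell(phi_inv(x), phi_inv(y)))

p = 2.5
ell_p = lambda a, b: (a**p + b**p) ** (1.0 / p)   # stable tail dependence function
phi = lambda t: math.exp(-t)                      # extreme-value generator
phi_inv = lambda u: -math.log(u)

def gumbel(x, y):
    """Closed-form Gumbel-Hougaard copula with parameter p."""
    return math.exp(-(((-math.log(x))**p + (-math.log(y))**p) ** (1.0 / p)))
```

Evaluating `archimax` and `gumbel` on any pair in the open unit square gives identical values, and the uniform-margin boundary condition C(u, 1) = u holds as well.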
Similarly, ψ\psi will be called a generator for the reciprocal Archimax copula, since the special case C∗,ψ,ℓ=‖.‖1{C}_{\ast ,\psi ,\ell ={\Vert .\Vert }_{1}}coincides with bivariate reciprocal Archimedean copulas introduced in [7]. The norm ℓ\ell is called a stable tail dependence function, and the function Aℓ(x)≔ℓ(x,1−x){A}_{\ell }\left(x):= \ell \left(x,1-x)for x∈[0,1]x\in \left[0,1]is called Pickands dependence function. (Note that ℓ\ell is fully determined by Aℓ{A}_{\ell }via ℓ(x,y)=(x+y)Aℓxx+y\ell \left(x,y)=\left(x+y){A}_{\ell }\left(\frac{x}{x+y}\right).) The nomenclature originates from the field of multivariate extreme-value theory, since the subfamily of copulas of the form Cφ(x)=exp(−x),ℓ{C}_{\varphi \left(x)=\exp \left(-x),\ell }coincides with bivariate extreme-value copulas, see [8] for background on the matter.Regarding the strength of association between the components XXand YYthere are two different regimes. If the function φ\varphi happens to be the Laplace transform of a positive random variable MM, we have non-negative association, while we can have negative association in the complementary case, when φ\varphi is a so-called strict Archimedean generator (not a Laplace transform). The case of positive association is significantly better understood, because in this case the survival function φ∘ℓ(x,y)=E[exp{−Mℓ(x,y)}]\varphi \circ \ell \left(x,y)={\mathbb{E}}\left[\exp \left\{-M\ell \left(x,y)\right\}]equals that of a scale mixture of a bivariate min-stable exponential distribution, which is a well-investigated concept of positive association, see [9, Chapter 6, p. 169 ff]. In particular, for random vectors with survival function exp(−mℓ(x,y))\exp \left(-m\ell \left(x,y)), or equivalently for extreme-value copulas Cφ(x)=exp(−x),ℓ{C}_{\varphi \left(x)=\exp \left(-x),\ell }, there exist exact simulation algorithms due to [4], which are feasible for many ℓ\ell . 
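The stated one-to-one relation between ℓ and Aℓ is easy to verify numerically, e.g., for the p-norm, whose Pickands dependence function is Aℓ(t)=(t^p+(1−t)^p)^{1/p}. A minimal check with our own helper names:

```python
def ell(x, y, p=3.0):
    """Stable tail dependence function l = p-norm."""
    return (x**p + y**p) ** (1.0 / p)

def A(t, p=3.0):
    """Pickands dependence function A_l(t) = l(t, 1-t)."""
    return ell(t, 1.0 - t, p)

def ell_from_A(x, y, p=3.0):
    """Recover l from A via l(x, y) = (x+y) * A(x/(x+y))."""
    s = x + y
    return s * A(x / s, p)
```

The reconstruction agrees with the direct evaluation up to floating-point error, and A satisfies the boundary normalization A(0) = A(1) = 1 together with max{t, 1−t} ≤ A(t) ≤ 1.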
Our intention is to provide an algorithm that is equally general with respect to the choice of ℓ\ell , but is not restricted to Laplace transforms φ(x)=E[exp(−Mx)]\varphi \left(x)={\mathbb{E}}\left[\exp \left(-Mx)], but works also for strict generators φ\varphi , in particular for the important special case φ(x)=(1−x)+\varphi \left(x)={\left(1-x)}_{+}.We recap the existing literature that forms the basis for our derivation. On the one hand, there is the following decomposition result due to [2].Theorem 1.1(Charpentier et al. [2]) Let ℓ\ell be an orthant–monotonic norm on R2{{\mathbb{R}}}^{2}subject to the normalization ℓ(1,0)=ℓ(0,1)=1\ell \left(1,0)=\ell \left(0,1)=1, and let φ:[0,∞)→[0,1]\varphi :{[}0,\infty )\to \left[0,1]be a continuous and convex function with φ(0)=1\varphi \left(0)=1and limx→∞φ(x)=0{\mathrm{lim}}_{x\to \infty }\varphi \left(x)=0. Then there exists a random vector (Xℓ,Yℓ)\left({X}_{\ell },{Y}_{\ell })with survival function P(Xℓ>x,Yℓ>y)=(1−ℓ(x,y))+{\mathbb{P}}\left({X}_{\ell }\gt x,{Y}_{\ell }\gt y)={(1-\ell \left(x,y))}_{+}and an independent positive random variable Rφ{R}_{\varphi }with the property E[(1−x/Rφ)+]=φ(x){\mathbb{E}}\left[{\left(1-x\text{/}{R}_{\varphi })}_{+}]=\varphi \left(x), whose distribution is uniquely determined by the function φ\varphi . Furthermore, the random vector (X,Y)≔Rφ(Xℓ,Yℓ)\left(X,Y):= {R}_{\varphi }\left({X}_{\ell },{Y}_{\ell })has survival function P(X>x,Y>y)=φ∘ℓ(x,y){\mathbb{P}}\left(X\gt x,Y\gt y)=\varphi \circ \ell \left(x,y), x,y≥0x,y\ge 0, and survival copula Cφ,ℓ{C}_{\varphi ,\ell }. Thus, the random vector (φ(RφXℓ),φ(RφYℓ))(\varphi \left({R}_{\varphi }{X}_{\ell }),\varphi \left({R}_{\varphi }{Y}_{\ell }))has distribution function Cφ,ℓ{C}_{\varphi ,\ell }.In words, Theorem 1.1 states that every Archimax copula arises as scale mixture of the special Archimax copula with generator φ(x)=(1−x)+\varphi \left(x)={\left(1-x)}_{+}, which equals the Archimax copula with minimal association for a fixed ℓ\ell . 
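To make the radial variable Rφ of Theorem 1.1 tangible: for the extreme-value generator φ(x)=exp(−x) one can verify by direct integration that Rφ is Erlang(2)-distributed, i.e., a sum of two independent unit-mean exponentials, since then E[(1−x/R)+]=e^{−x}. A Monte Carlo sanity check (our own illustration, not from the paper):

```python
import math
import random

rng = random.Random(42)
n = 200_000
# Claim (checkable by integration): for phi(x) = exp(-x), R_phi ~ Erlang(2),
# i.e. the sum of two independent unit-mean exponential random variables.
R = [rng.expovariate(1.0) + rng.expovariate(1.0) for _ in range(n)]

def williamson_estimate(x):
    """Monte Carlo estimate of E[(1 - x/R)_+], which should equal exp(-x)."""
    return sum(max(0.0, 1.0 - x / r) for r in R) / n
```

With two hundred thousand draws, the estimate matches e^{−x} to within sampling error for a range of arguments x.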
When simulating (X,Y)\left(X,Y)with survival copula Cφ,ℓ{C}_{\varphi ,\ell }, this allows us to perform two independent subsequent simulation steps: first simulation of Rφ{R}_{\varphi }and second simulation of (Xℓ,Yℓ)\left({X}_{\ell },{Y}_{\ell })with survival copula given in terms of ℓ\ell by Cℓ(x,y)≔Cφ(x)=(1−x)+,ℓ(x,y)=(1−ℓ(1−x,1−y))+.{C}_{\ell }\left(x,y):= {C}_{\varphi \left(x)={\left(1-x)}_{+},\ell }\left(x,y)={(1-\ell \left(1-x,1-y))}_{+}.On the other hand, in the original reference [1] the authors propose an exact simulation algorithm, which we are going to refine. According to Theorem 1.1, we may focus on the simulation of (Xℓ,Yℓ)\left({X}_{\ell },{Y}_{\ell }), which corresponds to the special case φ(x)=(1−x)+\varphi \left(x)={\left(1-x)}_{+}, and in which the method of [1] can be formulated as in the following theorem.Theorem 1.2(Capéraà et al. [1]) Assume that the second derivative Aℓ″{A}_{\ell }^{^{\prime\prime} }of Aℓ{A}_{\ell }exists and is continuous everywhere on (0,1)\left(0,1). 
Let U,VU,Vand Zℓ{Z}_{\ell }be three independent random variables, where U,V∼U[0,1]U,V\hspace{0.33em} \sim \hspace{0.33em}{\mathcal{U}}\left[0,1]andP(Zℓ≤z)=z+z(1−z)Aℓ′(z)Aℓ(z),z∈[0,1].{\mathbb{P}}\left({Z}_{\ell }\le z)=z+z\left(1-z)\frac{{A}_{\ell }^{^{\prime} }\left(z)}{{A}_{\ell }\left(z)},\hspace{1.0em}z\in \left[0,1].Further denote the density of Zℓ{Z}_{\ell }by fZℓ{f}_{{Z}_{\ell }}(it exists by the assumed existence of Aℓ″{A}_{\ell }^{^{\prime\prime} }) and define the threshold functionpℓ(z)≔z(1−z)Aℓ″(z)fZℓ(z)Aℓ(z),z∈(0,1).{p}_{\ell }\left(z):= \frac{z\left(1-z)\hspace{0.33em}{A}_{\ell }^{^{\prime\prime} }\left(z)}{{f}_{{Z}_{\ell }}\left(z){A}_{\ell }\left(z)},\hspace{1.0em}z\in \left(0,1).Under these assumptions, the random vector (Xℓ,Yℓ)\left({X}_{\ell },{Y}_{\ell })from Theorem 1.1 admits the stochastic representation(3)(Xℓ,Yℓ)∼ZℓAℓ(Zℓ),1−ZℓAℓ(Zℓ),ifU>pℓ(Zℓ)VZℓAℓ(Zℓ),1−ZℓAℓ(Zℓ),ifU≤pℓ(Zℓ).\left({X}_{\ell },{Y}_{\ell })\hspace{0.33em} \sim \hspace{0.33em}\left\{\begin{array}{ll}\left(\frac{{Z}_{\ell }}{{A}_{\ell }\left({Z}_{\ell })},\frac{1-{Z}_{\ell }}{{A}_{\ell }\left({Z}_{\ell })}\right),& {if}\hspace{0.33em}U\gt {p}_{\ell }\left({Z}_{\ell })\\ V\left(\frac{{Z}_{\ell }}{{A}_{\ell }\left({Z}_{\ell })},\frac{1-{Z}_{\ell }}{{A}_{\ell }\left({Z}_{\ell })}\right),& {if}\hspace{0.33em}U\le {p}_{\ell }\left({Z}_{\ell }).\end{array}\right.Our proposed algorithm below provides concrete implementation advice for simulation from Cℓ{C}_{\ell }. For the simulation of Cℓ{C}_{\ell }we use a different approach from Theorem 1.2; in particular, we do not require Aℓ″{A}_{\ell }^{^{\prime\prime} }to exist, but we do make use of the distributional statement (3) in our derivation.
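For intuition on the law of Zℓ: its distribution function z+z(1−z)Aℓ′(z)/Aℓ(z) can be written out explicitly for the p-norm, and one can check numerically that it is indeed a proper, increasing distribution function on (0,1). The helpers below are our own illustration, not the authors' implementation.

```python
def A(z, p=2.0):
    """Pickands dependence function of the p-norm: (z^p + (1-z)^p)^(1/p)."""
    return (z**p + (1.0 - z)**p) ** (1.0 / p)

def A_prime(z, p=2.0):
    """Derivative of A for the p-norm."""
    return (z**(p - 1.0) - (1.0 - z)**(p - 1.0)) * A(z, p)**(1.0 - p)

def F_Z(z, p=2.0):
    """Distribution function of Z_l from Theorem 1.2."""
    return z + z * (1.0 - z) * A_prime(z, p) / A(z, p)
```

On a fine grid the values stay inside [0,1], increase monotonically, and approach 0 and 1 at the endpoints, as a distribution function must.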
The main idea is to use the well-known representation (4)Aℓ(x)=2E[max{Qx,(1−Q)(1−x)}]=1+∫0x1−2P(Q≤1−q)dq,{A}_{\ell }\left(x)=2{\mathbb{E}}\left[\max \left\{Qx,\left(1-Q)\left(1-x)\right\}]=1+\underset{0}{\overset{x}{\int }}1-2{\mathbb{P}}\left(Q\le 1-q){\rm{d}}q,which defines a one-to-one relationship between the parameterizing norm ℓ\ell and the distribution of a random variable Q∈[0,1]Q\in \left[0,1]with 2E[Q]=12{\mathbb{E}}\left[Q]=1. This representation of ℓ\ell in terms of QQis called Pickands representation in extreme-value theory. The last equality in (4) shows that the right-hand version of the derivative Aℓ′{A}_{\ell }^{^{\prime} }, which is well-defined everywhere by convexity of Aℓ{A}_{\ell }and which we simply denote by Aℓ′{A}_{\ell }^{^{\prime} }throughout, essentially determines the distribution function of QQand, if existing, the second derivative Aℓ″{A}_{\ell }^{^{\prime\prime} }equals the density of 1−Q1-Q. Whereas the analytical treatment of extreme-value copulas in terms of Aℓ{A}_{\ell }is traditional, the article [18] provides some convincing arguments as to why a treatment in terms of the cdf of QQ(i.e., essentially Aℓ′{A}_{\ell }^{^{\prime} }) can sometimes be more convenient. The appearance of Aℓ′{A}_{\ell }^{^{\prime} }and Aℓ″{A}_{\ell }^{^{\prime\prime} }in (3) above thus hints at the possibility to reformulate the simulation algorithm in terms of QQ. 
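Representation (4) is easy to test empirically: for Q uniform on [0,1] (so that 2E[Q]=1), the integral form of (4) yields the closed form Aℓ(x)=1−x+x², which a Monte Carlo estimate of 2E[max{Qx,(1−Q)(1−x)}] should reproduce. The snippet below is our own illustration.

```python
import random

rng = random.Random(1)
n = 200_000
Q = [rng.random() for _ in range(n)]   # Q ~ Uniform[0,1], hence 2*E[Q] = 1

def A_mc(x):
    """Monte Carlo estimate of 2*E[max{Qx, (1-Q)(1-x)}] from (4)."""
    return 2.0 * sum(max(q * x, (1.0 - q) * (1.0 - x)) for q in Q) / n

def A_closed(x):
    """Closed form from the integral version of (4) for uniform Q: 1 + x^2 - x."""
    return 1.0 - x + x * x
```

The two evaluations agree to within sampling error across the unit interval, illustrating the one-to-one relationship between ℓ (equivalently Aℓ) and the law of Q.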
In fact, we manage to provide a simulation algorithm for Cℓ{C}_{\ell }that is exact and feasible whenever the following pre-requisites are met:Assumption 1(Exact evaluation of ℓ\ell ) We can evaluate ℓ(y,x)\ell (y,x)exactly and efficiently for arbitrary x,y≥0x,y\ge 0.Assumption 2(Exact evaluation of Aℓ′{A}_{\ell }^{^{\prime} }) We can evaluate Aℓ′(x){A}_{\ell }^{^{\prime} }\left(x)exactly and efficiently for arbitrary x∈(0,1)x\in \left(0,1).Assumption 3(Exact simulation of QQ) The random variable QQcan be simulated exactly and efficiently.Assumptions 1 and 2 are clearly mild, in particular as they are minimal requirements in order to have hope for simulation according to (3) or the inverse Rosenblatt transformation method via partial derivatives to work at all. The decisive question the thoughtful reader should raise at this point is: “Why is it useful to re-formulate the known model (3) in terms of QQ?” To this end, we point out that due to the aforementioned Pickands representation the random variable QQplays a central role in extreme-value theory, in particular the law of QQis feasible very often, as we will demonstrate below in terms of a comprehensive survey of models. For instance, [4] provides two simulation algorithms for extreme-value copulas precisely under Assumption 3. While the algorithms of [4] are restricted to extreme-value copulas but treat the general case of dd-dimensional extreme-value copulas also for d>2d\gt 2, the algorithm we develop is restricted to the bivariate case only but can be leveraged to an algorithm for arbitrary bivariate Archimax copulas. The dd-dimensional generalization remains an interesting open problem, to date only solved for the special case ℓ=‖.‖p\ell ={\Vert .\Vert }_{p}, see [15].The remainder of the article is organized as follows. Section 2 derives our simulation algorithm, while Section 3 demonstrates it by providing examples. 
Section 4 discusses some interesting aspects of the particular case when ℓ\ell is symmetric, and Section 5 treats the situation of bivariate reciprocal Archimax copulas, in particular indicating that our simulation algorithm for bivariate Archimax copulas immediately implies an exact simulation algorithm for bivariate reciprocal Archimax copulas as well. Section 6 finally concludes.

2 Exact simulation from Cℓ{C}_{\ell }in terms of QQ

We derive an alternative stochastic representation for the random vector (Xℓ,Yℓ)\left({X}_{\ell },{Y}_{\ell })in Theorem 1.1, which serves as a convenient basis for simulation under our Assumptions 1–3. One problem with the stochastic representation of Theorem 1.2 is that it requires Aℓ″{A}_{\ell }^{^{\prime\prime} }to exist, which is restrictive. Our method will resolve this problem. The other problem is that the probability law of Zℓ{Z}_{\ell }is rather complicated to simulate in many cases. Our proposed method replaces the requirement to simulate Zℓ{Z}_{\ell }with the requirement to simulate QQfrom the Pickands representation (4), and we demonstrate that this is feasible in many cases. For the sake of clarity, we recall that the survival function of (Xℓ,Yℓ)\left({X}_{\ell },{Y}_{\ell })is given by (1−ℓ(x,y))+=Cℓ(1−x,1−y){(1-\ell \left(x,y))}_{+}\hspace{0.25em}={C}_{\ell }\left(1-x,1-y), and the distribution function of (Xℓ,Yℓ)\left({X}_{\ell },{Y}_{\ell })equals (5)Cˆℓ(x,y)≔x+y−1+Cℓ(1−x,1−y),{\hat{C}}_{\ell }\left(x,y):= x+y-1+{C}_{\ell }\left(1-x,1-y),which is itself a copula (the survival copula of Cℓ{C}_{\ell }) and (1−Xℓ,1−Yℓ)∼Cℓ\left(1-{X}_{\ell },1-{Y}_{\ell })\hspace{0.33em} \sim \hspace{0.33em}{C}_{\ell }. Our proposed method to simulate (Xℓ,Yℓ)\left({X}_{\ell },{Y}_{\ell })requires the exact simulation of an absolutely continuous random variable on (0,1)\left(0,1)whose density hQ{h}_{Q}depends on a realization of QQfrom the Pickands representation (4).
As a first auxiliary step, the following lemma states the explicit form of this density. It furthermore provides the basis for an exact and efficient simulation from hQ{h}_{Q}via the so-called rejection–acceptance sampling method, see [12, p. 234–235] for background. The idea of rejection–acceptance sampling for a (possibly complicated) density hhis to find an easy-to-simulate density gg, called comparison density, and a finite constant c>0c\gt 0such that h≤cgh\le cg. Given this, one may iteratively simulate variates Xg∼g{X}_{g}\hspace{0.33em} \sim \hspace{0.33em}gand independent U∼U[0,1]U\hspace{0.33em} \sim \hspace{0.33em}{\mathcal{U}}\left[0,1]until finally U≤h(Xg)/(cg(Xg))U\le h\left({X}_{g})\hspace{0.1em}\text{/}\hspace{0.1em}(cg\left({X}_{g})), and then return the resulting sample Xg{X}_{g}satisfying this condition as a sample from hh.Lemma 2.1(Auxiliary density and associated comparison density) Let ℓ\ell be an orthant–monotonic norm on R2{{\mathbb{R}}}^{2}which is normalized to satisfy ℓ(0,1)=ℓ(1,0)=1\ell \left(0,1)=\ell \left(1,0)=1, and fix q∈[0,1]q\in \left[0,1]. With the positive constantcq≔1−2ℓ1q,11−q,ifq∈(0,1)1,ifq∈{0,1}{c}_{q}:= \left\{\begin{array}{ll}1-\frac{2}{\ell \left(\frac{1}{q},\frac{1}{1-q}\right)},& {if}\hspace{0.33em}q\in \left(0,1)\\ 1,& {if}\hspace{0.33em}q\in \left\{0,1\right\}\end{array}\right.the functionhq(x)≔1cq1−1Aℓ(x)1−2x−x(1−x)Aℓ′(x)Aℓ(x),ifx<1−q1+1Aℓ(x)1−2x−x(1−x)Aℓ′(x)Aℓ(x),ifx≥1−q,x∈(0,1),{h}_{q}\left(x):= \frac{1}{{c}_{q}}\left\{\begin{array}{ll}1-\frac{1}{{A}_{\ell }\left(x)}\left(1-2x-\frac{x\left(1-x){A}_{\ell }^{^{\prime} }\left(x)}{{A}_{\ell }\left(x)}\right),& {if}\hspace{0.33em}x\lt 1-q\\ 1+\frac{1}{{A}_{\ell }\left(x)}\left(1-2x-\frac{x\left(1-x){A}_{\ell }^{^{\prime} }\left(x)}{{A}_{\ell }\left(x)}\right),& {if}\hspace{0.33em}x\ge 1-q\\ \end{array}\right.,\hspace{1.0em}x\in \left(0,1),defines a probability density on (0,1)\left(0,1). 
Furthermore, we have that cqhq≤dqgq{c}_{q}{h}_{q}\le {d}_{q}{g}_{q}, where gq{g}_{q}is a piece-wise polynomial probability density on (0,1)\left(0,1)that is given by(6)gq(x)≔1dq[1{x<1−q}(1+2x)+1{x≥1−q}(3−2x)],x∈(0,1),{g}_{q}\left(x):= \frac{1}{{d}_{q}}{[}{1}_{\left\{x\lt 1-q\right\}}\left(1+2x)+{1}_{\left\{x\ge 1-q\right\}}\left(3-2x)],\hspace{1.0em}x\in \left(0,1),with normalizing constant dq≔2(1−q(1−q))>0{d}_{q}:= 2\left(1-q\left(1-q))\gt 0.ProofWe observe that the function Hq(x)≔1cqx−x(1−x)Aℓ(x),ifx<1−qx+x(1−x)Aℓ(x)−2(1−q)qAℓ(1−q),ifx≥1−q,{H}_{q}\left(x):= \frac{1}{{c}_{q}}\left\{\begin{array}{ll}x-\frac{x\left(1-x)}{{A}_{\ell }\left(x)},& \hspace{0.1em}\text{if}\hspace{0.1em}\hspace{0.33em}x\lt 1-q\\ x+\frac{x\left(1-x)}{{A}_{\ell }\left(x)}-2\frac{\left(1-q)q}{{A}_{\ell }\left(1-q)},& \hspace{0.1em}\text{if}\hspace{0.1em}\hspace{0.33em}x\ge 1-q,\\ \end{array}\right.is a continuous distribution function on [0,1]\left[0,1]for arbitrary q∈[0,1]q\in \left[0,1]with Hq(0)=0{H}_{q}\left(0)=0and Hq(1)=1{H}_{q}\left(1)=1. To verify this, we need to check that x±x(1−x)/Aℓ(x)x\pm x\left(1-x)\hspace{0.1em}\text{/}\hspace{0.1em}{A}_{\ell }\left(x)is non-decreasing, which follows from the identity ∂∂xx±x(1−x)Aℓ(x)=1±P(Zℓ>x)−xAℓ(x),\frac{\partial }{\partial x}\left(x\pm \frac{x\left(1-x)}{{A}_{\ell }\left(x)}\right)=1\pm \frac{{\mathbb{P}}\left({Z}_{\ell }\gt x)-x}{{A}_{\ell }\left(x)},and the bounds P(Zℓ>x)∈[0,1]{\mathbb{P}}\left({Z}_{\ell }\gt x)\in \left[0,1]as well as Aℓ(x)≥max{x,1−x}{A}_{\ell }\left(x)\ge \max \left\{x,1-x\right\}. 
Similar estimates, again using the structural similarity between the definition of hq(x){h}_{q}\left(x)and P(Zℓ≤x){\mathbb{P}}\left({Z}_{\ell }\le x), readily show that the density hq=Hq′{h}_{q}={H}_{q}^{^{\prime} }of Hq{H}_{q}is bounded from above by dqgq/cq{d}_{q}{g}_{q}\hspace{0.1em}\text{/}\hspace{0.1em}{c}_{q}for gq{g}_{q}given in (6).□A random variable Xgq{X}_{{g}_{q}}with density gq{g}_{q}can be simulated via the inversion method, see [12, p. 234–235] for background, since its cumulative distribution function Gq(x)≔P(Xgq≤x)=∫0xgq(y)dy,x∈[0,1],{G}_{q}\left(x):= {\mathbb{P}}\left({X}_{{g}_{q}}\le x)=\underset{0}{\overset{x}{\int }}{g}_{q}(y){\rm{d}}y,\hspace{1.0em}x\in \left[0,1],can be inverted in closed form according to Gq−1(v)=−12+121+8(1−q(1−q))v,ifv<(1−q)(2−q)2(1−q(1−q))32−129−8((1−q(1−q))v+q(1−q)),else,{G}_{q}^{-1}\left(v)=\left\{\begin{array}{ll}-\frac{1}{2}+\frac{1}{2}\sqrt{1+8\left(1-q\left(1-q))v},& \hspace{0.1em}\text{if}\hspace{0.1em}\hspace{0.33em}v\lt \frac{\left(1-q)\left(2-q)}{2\left(1-q\left(1-q))}\\ \frac{3}{2}-\frac{1}{2}\sqrt{9-8\left(\left(1-q\left(1-q))v+q\left(1-q))},& \hspace{0.1em}\text{else}\hspace{0.1em},\end{array}\right.for v∈(0,1)v\in \left(0,1). Consequently, Lemma 2.1 implies that a random variable Xhq{X}_{{h}_{q}}with density hq{h}_{q}can be simulated exactly via the rejection–acceptance method under Assumptions 1 and 2. Algorithm 1 summarizes the resulting simulation method for Xhq∼hq{X}_{{h}_{q}}\hspace{0.33em} \sim \hspace{0.33em}{h}_{q}based on this idea. Algorithm 1.
Simulation of Xhq ∼ hq via rejection–acceptance

1: procedure SimulateH(q)
2:   U ← 1 and T ← 0
3:   while U > T do
4:     simulate U, V uniform on [0,1]
5:     if V < (1−q)(2−q)/(2(1−q(1−q))) then
6:       Xgq ← −1/2 + (1/2)√(1+8(1−q(1−q))V)
7:     else
8:       Xgq ← 3/2 − (1/2)√(9−8((1−q(1−q))V+q(1−q)))
9:     end if
10:    ⊳ Xgq has comparison density gq given in (6)
11:    if Xgq < 1−q then
12:      T ← [1/(1+2Xgq)]·[1 − (1/Aℓ(Xgq))(1 − 2Xgq − Xgq(1−Xgq)Aℓ′(Xgq)/Aℓ(Xgq))]
13:    else
14:      T ← [1/(3−2Xgq)]·[1 + (1/Aℓ(Xgq))(1 − 2Xgq − Xgq(1−Xgq)Aℓ′(Xgq)/Aℓ(Xgq))]
15:    end if
16:  end while
17:  return Xhq ← Xgq
18: end procedure

Based on the availability of an efficient simulation of Xhq{X}_{{h}_{q}} via Algorithm 1, the following theorem is the main result of the present article. Theorem 2.2 (Exact simulation of Cℓ{C}_{\ell }) Let ℓ\ell be an orthant–monotonic norm on R2{{\mathbb{R}}}^{2} subject to the normalization
ℓ(0,1)=ℓ(1,0)=1\ell \left(0,1)=\ell \left(1,0)=1. Let V∼U[0,1]V\hspace{0.33em} \sim \hspace{0.33em}{\mathcal{U}}\left[0,1]and QQfrom the Pickands representation (4) of Aℓ{A}_{\ell }be independent random variables. Given QQ(and independent of VV), let XhQ{X}_{{h}_{Q}}be a random variable with density hQ{h}_{Q}, simulated according to Algorithm 1. The random vector (Xℓ,Yℓ)\left({X}_{\ell },{Y}_{\ell })with survival function Cℓ(1−x,1−y){C}_{\ell }\left(1-x,1-y)and distribution function Cˆℓ{\hat{C}}_{\ell }has the stochastic representation(7)(Xℓ,Yℓ)∼XhQAℓ(XhQ),1−XhQAℓ(XhQ),ifℓV2Q,V2(1−Q)≥1V2Q,V2(1−Q),ifℓV2Q,V2(1−Q)<1.\left({X}_{\ell },{Y}_{\ell })\hspace{0.33em} \sim \hspace{0.33em}\left\{\begin{array}{ll}\left(\frac{{X}_{{h}_{Q}}}{{A}_{\ell }\left({X}_{{h}_{Q}})},\frac{1-{X}_{{h}_{Q}}}{{A}_{\ell }\left({X}_{{h}_{Q}})}\right),& {if}\hspace{0.33em}\ell \left(\frac{V}{2Q},\frac{V}{2\left(1-Q)}\right)\ge 1\\ \left(\frac{V}{2Q},\frac{V}{2\left(1-Q)}\right),& {if}\hspace{0.33em}\ell \left(\frac{V}{2Q},\frac{V}{2\left(1-Q)}\right)\lt 1.\\ \end{array}\right.ProofIt is sufficient to prove the claim for absolutely continuous QQ. Why? An arbitrary QQ(possibly with atoms) with E[Q]=1/2{\mathbb{E}}\left[Q]=1\hspace{0.1em}\text{/}\hspace{0.1em}2may be approximated arbitrarily well in distribution with a sequence Qn{Q}_{n}of absolutely continuous random variables satisfying E[Qn]=1/2{\mathbb{E}}\left[{Q}_{n}]=1\hspace{0.1em}\text{/}\hspace{0.1em}2. These correspond to a sequence of stable tail dependence functions ℓn{\ell }_{n}that converge point-wise, and this implies that the respective distributional statement also holds in the limit ℓ\ell associated with QQ. Note that this logic relies on the fact that convergence in distribution Qn→Q{Q}_{n}\to Qstands in one-to-one relation with the point-wise convergence ℓn→ℓ{\ell }_{n}\to \ell , resp. 
Cℓn→Cℓ{C}_{{\ell }_{n}}\to {C}_{\ell }.Under the assumption of absolute continuity of QQ, ℓ\ell is differentiable and it is clear that Cˆℓ{\hat{C}}_{\ell }has a singular component on the ℓ\ell -simplex {(x,y)∈[0,1]2:ℓ(x,y)=1}\left\{\left(x,y)\in {\left[0,1]}^{2}:\ell \left(x,y)=1\right\}, has zero mass outside the simplex and has a density inside the simplex. Consequently, we may compute this density of Cˆℓ{\hat{C}}_{\ell }inside the simplex by taking the partial derivative with respect to both components. To this end, we denote by fv(q1,q2){f}_{v}\left({q}_{1},{q}_{2})the density of the random vector (v/2Q,v/2(1−Q))\left(v\hspace{0.1em}\text{/}2Q,v\text{/}\hspace{0.1em}2\left(1-Q))for arbitrary but fixed v∈(0,1)v\in \left(0,1), and obtain for x,yx,ywith ℓ(x,y)<1\ell \left(x,y)\lt 1∂2∂x∂yCˆℓ(x,y)=−∂2∂x∂yℓ(x,y)=∂2∂x∂y∫01P(max{2Qx,2(1−Q)y}≤v)dv=∫01∂2∂x∂yPv2Q>x,v2(1−Q)>ydv=∫01fv(x,y)dv.\begin{array}{rcl}\frac{{\partial }^{2}}{\partial x\partial y}{\hat{C}}_{\ell }\left(x,y)& =& -\frac{{\partial }^{2}}{\partial x\partial y}\ell \left(x,y)=\frac{{\partial }^{2}}{\partial x\partial y}\underset{0}{\overset{1}{\displaystyle \int }}{\mathbb{P}}(\max \left\{2Qx,2\left(1-Q)y\right\}\le v){\rm{d}}v\\ & =& \underset{0}{\overset{1}{\displaystyle \int }}\frac{{\partial }^{2}}{\partial x\partial y}{\mathbb{P}}\left(\frac{v}{2Q}\gt x,\frac{v}{2\left(1-Q)}\gt y\right){\rm{d}}v=\underset{0}{\overset{1}{\displaystyle \int }}{f}_{v}\left(x,y){\rm{d}}v.\end{array}Denoting by pCˆℓ{p}_{{\hat{C}}_{\ell }}the mass that dCˆℓ{\rm{d}}{\hat{C}}_{\ell }allocates on the ℓ\ell -simplex, by dCˆℓs{\rm{d}}{\hat{C}}_{\ell }^{s}its singular and by dCˆℓc{\rm{d}}{\hat{C}}_{\ell }^{c}its continuous part, we have the decomposition Cˆℓ=(1−pCˆℓ)Cˆℓc+pCˆℓCˆℓs{\hat{C}}_{\ell }=\left(1-{p}_{{\hat{C}}_{\ell }}){\hat{C}}_{\ell }^{c}+{p}_{{\hat{C}}_{\ell }}{\hat{C}}_{\ell }^{s}. 
We obtain for x,y∈[0,1]x,y\in \left[0,1]from the above expression (1−pCˆℓ)Cˆℓc(x,y)=∫0x∫0y∫01fv(s,t)dv1{ℓ(s,t)<1}dsdt=E∫011{v2Q≤x,v2(1−Q)≤y}1{ℓv2Q,v2(1−Q)<1}dv=PV2Q≤x,V2(1−Q)≤y,ℓV2Q,V2(1−Q)<1.\begin{array}{rcl}\left(1-{p}_{{\hat{C}}_{\ell }}){\hat{C}}_{\ell }^{c}\left(x,y)& =& \underset{0}{\overset{x}{\displaystyle \int }}\underset{0}{\overset{y}{\displaystyle \int }}\underset{0}{\overset{1}{\displaystyle \int }}{f}_{v}\left(s,t){\rm{d}}v{1}_{\left\{\ell \left(s,t)\lt 1\right\}}{\rm{d}}s{\rm{d}}t\\ & =& {\mathbb{E}}\left[\underset{0}{\overset{1}{\displaystyle \int }}{1}_{\left\{\tfrac{v}{2Q}\le x,\tfrac{v}{2\left(1-Q)}\le y\right\}}{1}_{\left\{\ell \left(\tfrac{v}{2Q},\tfrac{v}{2\left(1-Q)}\right)\lt 1\right\}}{\rm{d}}v\right]\\ & =& {\mathbb{P}}\left(\frac{V}{2Q}\le x,\frac{V}{2\left(1-Q)}\le y,\ell \left(\frac{V}{2Q},\frac{V}{2\left(1-Q)}\right)\lt 1\right).\end{array}Furthermore, we obtain the probability 1−pCˆℓ1-{p}_{{\hat{C}}_{\ell }}by plugging in x=y=1x=y=1, which yields 1−pCˆℓ=PV<min2Q,2(1−Q),1ℓ12Q,12(1−Q)=PV<1ℓ12Q,12(1−Q)=PℓV2Q,V2(1−Q)<1=E1ℓ12Q,12(1−Q).\begin{array}{rcl}1-{p}_{{\hat{C}}_{\ell }}& =& {\mathbb{P}}\left(V\lt \min \left\{2Q,2\left(1-Q),\frac{1}{\ell \left(\frac{1}{2Q},\frac{1}{2\left(1-Q)}\right)}\right\}\right)={\mathbb{P}}\left(V\lt \frac{1}{\ell \left(\frac{1}{2Q},\frac{1}{2\left(1-Q)}\right)}\right)\\ & =& {\mathbb{P}}\left[\ell \left(\frac{V}{2Q},\frac{V}{2\left(1-Q)}\right)\lt 1\right]={\mathbb{E}}\left[\frac{1}{\ell \left(\frac{1}{2Q},\frac{1}{2\left(1-Q)}\right)}\right].\end{array}Note that the last equality is stated only for later reference, and implicitly these computations rely on the fact that 1≥min{2Q,2(1−Q)}≥1/ℓ12Q,12(1−Q)1\ge \min \left\{2Q,2\left(1-Q)\right\}\ge 1\hspace{0.1em}\text{/}\hspace{0.1em}\ell \left(\frac{1}{2Q},\frac{1}{2\left(1-Q)}\right)almost surely. 
This already proves the first part of our claimed statement.For the second part, from Theorem 1.2 we obtain that Cˆℓs(x,y)=PZℓℓ(Zℓ,1−Zℓ)≤x,1−Zℓℓ(Zℓ,1−Zℓ)≤yU>pℓ(Zℓ).{\hat{C}}_{\ell }^{s}\left(x,y)={\mathbb{P}}\left(\left.\frac{{Z}_{\ell }}{\ell \left({Z}_{\ell },1-{Z}_{\ell })}\le x,\hspace{1em}\frac{1-{Z}_{\ell }}{\ell \left({Z}_{\ell },1-{Z}_{\ell })}\le y\right|\hspace{0.33em}U\gt {p}_{\ell }\left({Z}_{\ell })\right).Using the well-known identity Aℓ′(z)=1−2P(Q≤1−z){A}_{\ell }^{^{\prime} }\left(z)=1-2{\mathbb{P}}\left(Q\le 1-z)from (4) and the definition of pℓ{p}_{\ell }and P(Zℓ≤z){\mathbb{P}}\left({Z}_{\ell }\le z)in Theorem 1.2, we find E[1−pℓ(Zℓ)]=E1−1ℓ12Q,12(1−Q)︸=cQ,see Lemma 2.1=PℓV2Q,V2(1−Q)>1=pCˆℓ,{\mathbb{E}}\left[1-{p}_{\ell }\left({Z}_{\ell })]={\mathbb{E}}\left[\mathop{\underbrace{1-\frac{1}{\ell \left(\frac{1}{2Q},\frac{1}{2\left(1-Q)}\right)}}}\limits_{={c}_{Q},\hspace{0.1em}\text{see Lemma 2.1}\hspace{0.1em}}\right]={\mathbb{P}}\left(\ell \left(\frac{V}{2Q},\frac{V}{2\left(1-Q)}\right)\gt 1\right)={p}_{{\hat{C}}_{\ell }},as well as pCˆℓP(Zℓ≤z∣U>pℓ(Zℓ))=Ez+z(1−z)Aℓ(z)−1ℓ12(1−z),12z,ifz≤1−Q1ℓ12Q,12(1−Q),ifz>1−Q.{p}_{{\hat{C}}_{\ell }}{\mathbb{P}}\left({Z}_{\ell }\le z| U\gt {p}_{\ell }\left({Z}_{\ell }))={\mathbb{E}}\left[z+\frac{z\left(1-z)}{{A}_{\ell }\left(z)}-\left.\left\{\begin{array}{ll}\frac{1}{\ell \left(\frac{1}{2\left(1-z)},\frac{1}{2\hspace{0.33em}z}\right)},& \hspace{0.1em}\text{if}\hspace{0.1em}\hspace{0.33em}z\le 1-Q\\ \frac{1}{\ell \left(\frac{1}{2Q},\frac{1}{2\left(1-Q)}\right)},& \hspace{0.1em}\text{if}\hspace{0.1em}\hspace{0.33em}z\gt 1-Q\\ \end{array}\right.\right\}\right].In the last expression we recognize the distribution function HQ{H}_{Q}from the proof of Lemma 2.1, so that this finally implies P(Zℓ≤z∣U>pℓ(Zℓ))=EcQpCˆℓHQ(z)=PXhQ≤z∣ℓV2Q,V2(1−Q)>1,{\mathbb{P}}\left({Z}_{\ell }\le z\hspace{0.33em}| \hspace{0.33em}U\gt {p}_{\ell }\left({Z}_{\ell }))={\mathbb{E}}\left[\frac{{c}_{Q}}{{p}_{{\hat{C}}_{\ell 
}}}\hspace{0.33em}{H}_{Q}\left(z)\right]={\mathbb{P}}\left({X}_{{h}_{Q}}\le z| \ell \left(\frac{V}{2Q},\frac{V}{2\left(1-Q)}\right)\gt 1\right),which establishes the claimed statement, when comparing with (3).□As mentioned in Section 1, the simulation of an arbitrary bivariate Archimax copula Cφ,ℓ{C}_{\varphi ,\ell }requires simulation from Cℓ{C}_{\ell }, which we have now established, and, independently, simulation of a random radius Rφ>0{R}_{\varphi }\gt 0, whose probability distribution is determined uniquely by the Williamson-2-transform φ(x)≔E[(1−x/Rφ)+]\varphi \left(x):= {\mathbb{E}}\left[{\left(1-x\text{/}{R}_{\varphi })}_{+}]. We first note that the distribution function of Rφ{R}_{\varphi }is given by the Williamson-inversion formula (8)P(Rφ≤x)=1−φ(x)+xφ′(x),{\mathbb{P}}\left({R}_{\varphi }\le x)=1-\varphi \left(x)+x{\varphi }^{^{\prime} }\left(x),where φ′{\varphi }^{^{\prime} }is meant to be the right-hand version of the (by convexity almost everywhere existing) derivative of φ\varphi . Regarding examples for feasible Rφ{R}_{\varphi }, we refer to [16]. Generally speaking, rejection–acceptance sampling of Rφ{R}_{\varphi }based on (8) is feasible whenever φ″{\varphi }^{^{\prime\prime} }exists and is analytically feasible and if the (then existing) density fRφ(x)=xφ″(x){f}_{{R}_{\varphi }}\left(x)=x{\varphi }^{^{\prime\prime} }\left(x)can be bounded from above by a scalar multiple of an easy-to-simulate density. 
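To make the Williamson inversion formula (8) concrete, the following sketch works out a toy generator. The choice $\varphi(x) = 1/(1+x)$ (the Clayton generator with parameter 1) is our illustrative assumption, not an example from the text; for this $\varphi$, formula (8) simplifies to $\mathbb{P}(R_\varphi \le x) = (x/(1+x))^2$, which can even be inverted in closed form, so that $R_\varphi$ can be sampled directly without rejection.

```python
import random

def phi(x):
    # Illustrative Archimedean generator (Clayton with parameter 1):
    # convex, phi(0) = 1, phi(x) -> 0 as x -> infinity.
    return 1.0 / (1.0 + x)

def cdf_R(x):
    # Williamson inversion formula (8): P(R <= x) = 1 - phi(x) + x * phi'(x),
    # here with phi'(x) = -1/(1+x)^2.
    return 1.0 - phi(x) + x * (-1.0 / (1.0 + x) ** 2)

def sample_R(u):
    # Closed-form inverse of cdf_R: solving (x/(1+x))^2 = u yields
    # x = sqrt(u) / (1 - sqrt(u)), so inversion sampling applies.
    s = u ** 0.5
    return s / (1.0 - s)

# Sanity check: formula (8) simplifies to (x/(1+x))^2 for this generator.
for x in [0.1, 1.0, 5.0]:
    assert abs(cdf_R(x) - (x / (1.0 + x)) ** 2) < 1e-12

# Monte Carlo check of the Williamson-2-transform:
# phi(x0) should equal E[(1 - x0/R)_+] for R sampled via sample_R.
random.seed(1)
n = 200_000
x0 = 0.7
est = sum(max(0.0, 1.0 - x0 / sample_R(random.random())) for _ in range(n)) / n
assert abs(est - phi(x0)) < 0.01
```

The same pattern applies to any generator whose derivative is available in closed form; when the inverse of (8) is not explicit, the rejection-based route described above is needed instead.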
In contrast, we end this section by providing a simulation algorithm for the special case
(9) $\varphi(x) = \left(1 - \int_{(0,\infty]} (1 - e^{-sx})\,\nu({\rm d}s)\right)_+,$
where $(1-e^{-s})\nu({\rm d}s)$ is a probability measure on $(0,\infty]$, under the following assumption:

Assumption 4 (Requirement to simulate $R_\varphi$ with generator (9)). The probability measure $\rho_\nu({\rm d}s) := (1-e^{-s})\nu({\rm d}s)$ can be simulated exactly.

A generator of the form (9) always implies $R_\varphi \le 1$. There are generators $\varphi$ of this form that are not analytically tractable at first glance, in particular for which it is not straightforward to apply direct acceptance-rejection sampling based on the density $f_{R_\varphi}$; an example is provided below. The function
$$\Psi_\nu(x) := \int_{(0,\infty]} (1 - e^{-xs})\,\nu({\rm d}s), \qquad x \ge 0,$$
is a so-called Bernstein function satisfying $\Psi_\nu(1) = 1$, in particular it is concave, so that $\varphi(x) := (1 - \Psi_\nu(x))_+$ equals the Williamson-2-transform of some random variable taking values in $(0,1]$, i.e., $\varphi$ is a proper Archimedean generator. Every $R_\varphi$ that arises in this way can be simulated under Assumption 4. Why? We observe that
$$\mathbb{P}(R_\varphi \le x) = \Psi_\nu(x) - x\,\Psi_\nu'(x) = \int_{(0,\infty]} \underbrace{\frac{1 - e^{-xs}(1 + xs)}{1 - e^{-s}}}_{=:\, G_s(x)}\, \rho_\nu({\rm d}s).$$
For fixed $s \in (0,\infty]$, the function $G_s : [0,1] \to [0,1]$ is the distribution function of a random variable $R_s$ on $[0,1]$.
Furthermore, simulation of $R_s \sim G_s$ is efficient and exact via acceptance-rejection, since $R_s$ takes values in $(0,1]$, has an atom at 1 with probability $s\,e^{-s}/(1 - e^{-s})$ and a density $G_s'$ on $(0,1)$ that is bounded from above by $x \mapsto x s^2/(1 - e^{-s})$. Summing up, in order to simulate $R_\varphi$, the following steps need to be carried out:

1. Simulate $S \sim \rho_\nu$, using Assumption 4.
2. Simulate $B \sim \text{Bernoulli}(S\,e^{-S}/(1 - e^{-S}))$.
3. If $B = 1$, return $R_\varphi = 1$.
4. Otherwise, repeatedly simulate independent $U, V \sim \mathcal{U}[0,1]$ until $U \le e^{-\sqrt{V}S}$, and then return $R_\varphi = \sqrt{V}$.

There exist closed-form analytic functions $\Psi_\nu$ for which $\Psi_\nu''$ (hence the resulting density $f_{R_\varphi}$) is inconvenient to deal with, but for which Assumption 4 is still satisfied. For instance, we may extract the following example from [13, Example 3.3].
With parameters α∈[0,1)\alpha \in {[}0,1)and θ≥0\theta \ge 0we consider the Bernstein function Ψν(x)=xΓ(x+θ)Γ(2+θ−α)Γ(1+θ)Γ(x+θ+1−α),x≥0,{\Psi }_{\nu }\left(x)=x\frac{\Gamma \left(x+\theta )\Gamma \left(2+\theta -\alpha )}{\Gamma \left(1+\theta )\Gamma \left(x+\theta +1-\alpha )},\hspace{1.0em}x\ge 0,which corresponds to the measure ν(ds)=Γ(2+θ−α)Γ(θ+1)Γ(1−α)e−θs(1−e−s)α+1(αe−s+θ(1−e−s)).\nu \left({\rm{d}}s)=\frac{\Gamma \left(2+\theta -\alpha )}{\Gamma \left(\theta +1)\Gamma \left(1-\alpha )}\frac{{e}^{-\theta s}}{{(1-{e}^{-s})}^{\alpha +1}}(\alpha {e}^{-s}+\theta (1-{e}^{-s})).The associated random variable SSwith probability distribution ρν{\rho }_{\nu }has density given by (1−α)gθ,2−α+αgθ+1,1−α\left(1-\alpha ){g}_{\theta ,2-\alpha }+\alpha {g}_{\theta +1,1-\alpha }, where ga,b{g}_{a,b}denotes the density of −log(Ba,b)-\log \left({B}_{a,b})and Ba,b{B}_{a,b}has Beta distribution with density proportional to x↦xa−1(1−x)b−1x\mapsto {x}^{a-1}{\left(1-x)}^{b-1}. The simulation of SSis thus straightforward as a mixture of logarithms of Beta distributions, see [12, p. 243] for advice on how to simulate Ba,b{B}_{a,b}. 
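Combining the four steps above with this mixture representation of $S$ gives a complete sampler for $R_\varphi$. The following sketch assumes $\theta > 0$ (so that both beta parameters are strictly positive); the function names are ours.

```python
import math
import random

def sample_S(alpha, theta):
    # S ~ rho_nu has density (1-alpha)*g_{theta,2-alpha} + alpha*g_{theta+1,1-alpha},
    # where g_{a,b} is the density of -log(B_{a,b}) for B_{a,b} ~ Beta(a, b).
    # Assumes theta > 0 so that both beta parameters are positive.
    if random.random() < 1.0 - alpha:
        b = random.betavariate(theta, 2.0 - alpha)
    else:
        b = random.betavariate(theta + 1.0, 1.0 - alpha)
    return -math.log(b)

def sample_R_phi(alpha, theta):
    # Steps 1-4 from the text: simulate S, flip the Bernoulli coin for the
    # atom of G_S at 1, otherwise rejection-sample from the density part.
    s = sample_S(alpha, theta)                          # step 1
    p_atom = s * math.exp(-s) / (1.0 - math.exp(-s))
    if random.random() < p_atom:                        # steps 2-3
        return 1.0
    while True:                                         # step 4
        u, v = random.random(), random.random()
        if u <= math.exp(-math.sqrt(v) * s):
            return math.sqrt(v)

random.seed(2)
samples = [sample_R_phi(alpha=0.5, theta=1.0) for _ in range(1000)]
assert all(0.0 < r <= 1.0 for r in samples)
```

The rejection loop in step 4 draws from the comparison density $2x$ (via $\sqrt{V}$) and accepts with probability $e^{-\sqrt{V}S}$, matching the bound $x \mapsto x s^2/(1-e^{-s})$ on $G_s'$.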
Computing the second derivative of $\Psi_\nu$ and developing a rejection-acceptance sampling method for $R_\varphi$ in this example appears rather inconvenient, whereas the presented method to simulate $R_\varphi$ and its associated Archimax copulas only requires evaluating $\Psi_\nu$ (in order to transform margins to uniforms), which is easy.

3 Examples for feasible $\ell$

We review some popular families of stable tail dependence functions $\ell$ for which the simulation of the associated $Q$ is viable, thereby demonstrating that our algorithm is feasible in many interesting cases.

3.1 $Q$ corresponding to conditionally iid extreme-value copulas

Many (but not all, see [14]) parametric families of symmetric stable tail dependence functions can be considered jointly under one common umbrella, by observing that they are of the particular form
$$\ell(x,y) = \ell_F(x,y) := \int_0^\infty \left[1 - F\left(\frac{u}{x}\right) F\left(\frac{u}{y}\right)\right] {\rm d}u,$$
where $F : [0,\infty) \to [0,1]$ equals the distribution function of a non-negative random variable $X_F$ with unit mean. In probabilistic terms, this means that
$$\ell_F(x,y) = \mathbb{E}[\max\{x\,X_{F,1},\, y\,X_{F,2}\}], \qquad x, y \ge 0,$$
where $X_{F,1}, X_{F,2} \sim F$ are iid random variables. These particular stable tail dependence functions are studied in [10], where it is shown that a random vector $(X,Y)$ with survival function $\exp(-\ell_F(x,y))$ is conditionally iid.
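The probabilistic representation $\ell_F(x,y) = \mathbb{E}[\max\{x X_{F,1}, y X_{F,2}\}]$ can be verified numerically. As an illustration we take the Gumbel family $\ell = \Vert \cdot \Vert_p$ treated in Example 3.4 below, whose $F$ is a unit-mean Fréchet distribution; the Monte Carlo check is an illustrative sketch, and the helper names are ours.

```python
import math
import random

def sample_frechet(p):
    # Unit-mean Frechet: F(x) = exp(-[Gamma(1 - 1/p) * x]^(-p)),
    # sampled by the inversion method.
    u = random.random()
    return (-math.log(u)) ** (-1.0 / p) / math.gamma(1.0 - 1.0 / p)

def ell_F_monte_carlo(x, y, p, n=200_000):
    # Estimate ell_F(x, y) = E[max(x * X1, y * X2)] with X1, X2 iid from F.
    total = 0.0
    for _ in range(n):
        total += max(x * sample_frechet(p), y * sample_frechet(p))
    return total / n

random.seed(3)
p = 3.0
x, y = 0.8, 0.6
exact = (x ** p + y ** p) ** (1.0 / p)  # the Gumbel norm ||(x, y)||_p
est = ell_F_monte_carlo(x, y, p)
assert abs(est - exact) < 0.02
```

For $p > 1$ the Fréchet mean is finite, so the Monte Carlo average converges to the norm; the same check works for any other $F$ with unit mean.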
The probability distribution of $Q = Q_F$ in this case can be simulated by
(10) $Q \sim 1_{\{B=0\}}\dfrac{Y_F}{Y_F + X_{F,2}} + 1_{\{B=1\}}\dfrac{X_{F,1}}{X_{F,1} + Y_F},$
where $X_{F,1}, X_{F,2} \sim F$, $B \sim \text{Bernoulli}(1/2)$ and $Y_F \sim x\,{\rm d}F(x)$ are independent random variables. For many popular models, the distribution $F$ as well as its so-called size bias $Y_F$ can be simulated efficiently. We provide some examples below. Note in particular that for both of the following two examples, the Hüsler-Reiss family and the Galambos family, simulation algorithms based on partial derivatives require numerical root-search algorithms, while the presented algorithm is exact.

Example 3.1 (Hüsler-Reiss family). With a parameter $\theta > 0$ we consider the norm
$$\ell(x,y) = y\,\Phi\left[\theta + \frac{1}{2\theta}\log\left(\frac{y}{x}\right)\right] + x\,\Phi\left[\theta + \frac{1}{2\theta}\log\left(\frac{x}{y}\right)\right], \qquad x, y \ge 0,$$
where $\Phi$ denotes the cdf of a standard normally distributed random variable. This is of the form $\ell_F$ with $F(x) = \Phi[(\log(x) + \theta^2)/(\sqrt{2}\,\theta)]$, a log-normal distribution.
The random variables XF∼F{X}_{F}\hspace{0.33em} \sim \hspace{0.33em}Fand YF∼xdF(x){Y}_{F}\hspace{0.33em} \sim \hspace{0.33em}x{\rm{d}}F\left(x)can be simulated according to XF∼e−θ2+2θW,YF∼eθ2+2θW,W∼Φ.{X}_{F}\hspace{0.33em} \sim \hspace{0.33em}{e}^{-{\theta }^{2}+\sqrt{2}\theta W},\hspace{1.0em}{Y}_{F}\hspace{0.33em} \sim \hspace{0.33em}{e}^{{\theta }^{2}+\sqrt{2}\theta W},\hspace{1.0em}W\hspace{0.33em} \sim \hspace{0.33em}\Phi .The required derivative of Aℓ′{A}_{\ell }^{^{\prime} }is available in closed form, given by Aℓ′(x)=Φθ+12θlogx1−x−Φθ+12θlog1−xx+ϕθ+12θlogx1−x12θ(1−x)−ϕθ+12θlog1−xx12θx,\begin{array}{rcl}{A}_{\ell }^{^{\prime} }\left(x)& =& \Phi \left[\theta +\frac{1}{2\theta }\log \left(\frac{x}{1-x}\right)\right]-\Phi \left[\theta +\frac{1}{2\theta }\log \left(\frac{1-x}{x}\right)\right]\\ & & +\phi \left[\theta +\frac{1}{2\theta }\log \left(\frac{x}{1-x}\right)\right]\frac{1}{2\theta \left(1-x)}-\phi \left[\theta +\frac{1}{2\theta }\log \left(\frac{1-x}{x}\right)\right]\frac{1}{2\theta x},\end{array}where ϕ(x)=Φ′(x)=exp(−x2/2)/2π\phi \left(x)={\Phi }^{^{\prime} }\left(x)=\exp \left(-{x}^{2}\hspace{0.1em}\text{/}\hspace{0.1em}2)\hspace{0.1em}\text{/}\hspace{0.1em}\sqrt{2\pi }equals the density of WW.Example 3.2(Galambos family) With a parameter θ>0\theta \gt 0we consider the norm ℓ(x,y)=x+y−(x−θ+y−θ)−1θ,x,y≥0.\ell \left(x,y)=x+y-{({x}^{-\theta }+{y}^{-\theta })}^{-\tfrac{1}{\theta }},\hspace{1.0em}x,y\ge 0.This is of the form ℓF{\ell }_{F}with F(x)=1−exp{−[Γ(1+1/θ)x]θ}F\left(x)=1-\exp \left\{-{\left[\Gamma \left(1+1\text{/}\theta )x]}^{\theta }\right\}a Weibull distribution. (Note that in [10, Example 2] there is a typo, mistaking θ\theta for 1/θ1\hspace{0.1em}\text{/}\hspace{0.1em}\theta .) 
The random variable XF∼F{X}_{F}\hspace{0.33em} \sim \hspace{0.33em}Fcan be simulated exactly via the inversion method and YF∼xdF(x){Y}_{F}\hspace{0.33em} \sim \hspace{0.33em}x{\rm{d}}F\left(x)according to YF∼G1/θΓ(1+1/θ),G∼Γ(1+1/θ,1),{Y}_{F}\hspace{0.33em} \sim \hspace{0.33em}\frac{{G}^{1\text{/}\theta }}{\Gamma \left(1+1\hspace{0.1em}\text{/}\hspace{0.1em}\theta )},\hspace{1.0em}G\hspace{0.33em} \sim \hspace{0.33em}\Gamma \left(1+1\hspace{0.1em}\text{/}\hspace{0.1em}\theta ,1),where Γ(β,η)\Gamma \left(\beta ,\eta )denotes the Gamma distribution with density proportional to x↦e−ηxxβ−1x\mapsto {e}^{-\eta x}{x}^{\beta -1}. The required derivative of Aℓ′{A}_{\ell }^{^{\prime} }is available in closed form, given by Aℓ′(x)=(x−θ+(1−x)−θ)−1θ−1((1−x)−θ−1−x−θ−1).{A}_{\ell }^{^{\prime} }\left(x)={({x}^{-\theta }+{\left(1-x)}^{-\theta })}^{-\tfrac{1}{\theta }-1}({\left(1-x)}^{-\theta -1}-{x}^{-\theta -1}).Example 3.3(Cuadras-Augé family) With a parameter θ∈(0,1]\theta \in (0,1]we consider the norm ℓ(x,y)=max{x,y}+(1−θ)min{x,y}=θ‖(x,y)‖∞+(1−θ)‖(x,y)‖1.\ell \left(x,y)=\max \left\{x,y\right\}+\left(1-\theta )\min \left\{x,y\right\}=\theta {\Vert \left(x,y)\Vert }_{\infty }+\left(1-\theta ){\Vert \left(x,y)\Vert }_{1}.This is of the form ℓF{\ell }_{F}with F(x)=1−θ+θ1{θx≥1}F\left(x)=1-\theta +\theta {1}_{\left\{\theta x\ge 1\right\}}, meaning that P(XF=0)=1−θ=1−P(XF=1/θ){\mathbb{P}}\left({X}_{F}=0)=1-\theta =1-{\mathbb{P}}\left({X}_{F}=1\hspace{0.1em}\text{/}\hspace{0.1em}\theta ). Furthermore, the associated size bias YF=1/θ{Y}_{F}=1\hspace{0.1em}\text{/}\hspace{0.1em}\theta equals a constant in this case. The required derivative Aℓ′{A}_{\ell }^{^{\prime} }is piece-wise constant, taking the value −θ-\theta on (0,1/2)\left(0,1\hspace{0.1em}\text{/}\hspace{0.1em}2)and the value θ\theta on [1/2,1){[}1\hspace{0.1em}\text{/}\hspace{0.1em}2,1). 
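The examples of this subsection all feed into formula (10) in the same way: draw $X_{F,1}, X_{F,2} \sim F$, the size bias $Y_F \sim x\,{\rm d}F(x)$ and a fair coin $B$, then combine. The following sketch wires this up for the Galambos family from Example 3.2; the function names are ours.

```python
import math
import random

def sample_Q(sample_XF, sample_YF):
    # Formula (10): Q = Y_F / (Y_F + X_{F,2})      if B = 0,
    #               Q = X_{F,1} / (X_{F,1} + Y_F)  if B = 1,
    # with all ingredients independent and B ~ Bernoulli(1/2).
    if random.random() < 0.5:
        yf, xf2 = sample_YF(), sample_XF()
        return yf / (yf + xf2)
    xf1, yf = sample_XF(), sample_YF()
    return xf1 / (xf1 + yf)

def galambos_XF(theta):
    # F(x) = 1 - exp(-[Gamma(1 + 1/theta) * x]^theta), a unit-mean Weibull,
    # simulated by inversion.
    u = random.random()
    return (-math.log(1.0 - u)) ** (1.0 / theta) / math.gamma(1.0 + 1.0 / theta)

def galambos_YF(theta):
    # Size bias Y_F = G^(1/theta) / Gamma(1 + 1/theta), G ~ Gamma(1 + 1/theta, 1).
    g = random.gammavariate(1.0 + 1.0 / theta, 1.0)
    return g ** (1.0 / theta) / math.gamma(1.0 + 1.0 / theta)

random.seed(4)
theta = 2.0
qs = [sample_Q(lambda: galambos_XF(theta), lambda: galambos_YF(theta))
      for _ in range(100_000)]
# The Pickands representation (4) requires E[Q] = 1/2, which we can check.
assert abs(sum(qs) / len(qs) - 0.5) < 0.01
```

Swapping in the Hüsler-Reiss or Cuadras-Augé ingredients for `sample_XF` and `sample_YF` yields the corresponding $Q$ without any further changes.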
The case of general piece-wise constant Aℓ′{A}_{\ell }^{^{\prime} }is further discussed in Section 3.2.Example 3.4(Gumbel family ℓ=‖.‖p\ell ={\Vert .\Vert }_{p}) When ℓ(x,y)=‖(x,y)‖p=(xp+yp)1/p\ell \left(x,y)={\Vert \left(x,y)\Vert }_{p}={({x}^{p}+{y}^{p})}^{1\text{/}p}for some p>1p\gt 1, then ℓ=ℓF\ell ={\ell }_{F}with associated Fréchet distribution function F(x)=exp{−[Γ(1−1/p)x]−p}F\left(x)=\exp \left\{-{\left[\Gamma \left(1-1\text{/}p)x]}^{-p}\right\}. In particular, XF∼F{X}_{F}\hspace{0.33em} \sim \hspace{0.33em}Fmay be simulated exactly via the inversion method, and the size bias YF{Y}_{F}may be simulated according to YF∼G−1/pΓ(1−1/p),G∼Γ(1−1/p,1),{Y}_{F}\hspace{0.33em} \sim \hspace{0.33em}\frac{{G}^{-1\text{/}p}}{\Gamma \left(1-1\hspace{0.1em}\text{/}\hspace{0.1em}p)},\hspace{1.0em}G\hspace{0.33em} \sim \hspace{0.33em}\Gamma \left(1-1\hspace{0.1em}\text{/}\hspace{0.1em}p,1),where Γ(β,η)\Gamma \left(\beta ,\eta )denotes the Gamma distribution with density proportional to x↦e−ηxxβ−1x\mapsto {e}^{-\eta x}{x}^{\beta -1}. The required derivative Aℓ′{A}_{\ell }^{^{\prime} }is known in closed form, to wit Aℓ′(x)=(xp+(1−x)p)−1p−1(xp−1−(1−x)p−1).{A}_{\ell }^{^{\prime} }\left(x)={({x}^{p}+{\left(1-x)}^{p})}^{-\tfrac{1}{p}-1}({x}^{p-1}-{\left(1-x)}^{p-1}).In [13,11] it is further explained how to simulate QQassociated with convex mixtures of stable tail dependence functions of the type ℓF{\ell }_{F}, and how this can be used to construct interesting and tractable new families. The following is an example in this spirit.Example 3.5(Holmgren-Liouville family) Let SSbe some random variable taking values in (0,1)\left(0,1). Then the function Ψ(x)=1−E[(1−S)x]E[S],x≥0,\Psi \left(x)=\frac{1-{\mathbb{E}}{[}{\left(1-S)}^{x}]}{{\mathbb{E}}\left[S]},\hspace{1.0em}x\ge 0,is a so-called Bernstein function. Assume further that SSis either absolutely continuous or discrete. 
In the absolutely continuous case we denote by fS{f}_{S}its density, and in the discrete case we denote by fS{f}_{S}the function fS(z)=P(S=z){f}_{S}\left(z)={\mathbb{P}}\left(S=z), thus treating both the absolutely continuous case and the discrete case jointly with our notation. Similarly, we consider a random variable ZZwith associated function fZ(z)=1E[1+S/log(1−S)]1z+1log(1−z)zfS(z),z∈(0,1].{f}_{Z}\left(z)=\frac{1}{{\mathbb{E}}\left[1+S/\log \left(1-S)]}\left(\frac{1}{z}+\frac{1}{\log \left(1-z)}\right)z{f}_{S}\left(z),\hspace{1.0em}z\in (0,1].This function is bounded from above by fZ(z)≤2fS(z)E[S],z∈(0,1].{f}_{Z}\left(z)\le \frac{{2f}_{S}\left(z)}{{\mathbb{E}}\left[S]},\hspace{1.0em}z\in (0,1].Consequently, we can simulate ZZexactly via acceptance-rejection, whenever we can simulate SS, see [12, p. 235–237]. Now we consider ℓ(x,y)=2Ψ(2)xyx+y−2xyx+y−max{x,y}Ψ1−min{x,y}max{x,y},\ell \left(x,y)=\frac{2\Psi \left(2)x\hspace{0.33em}y}{x+y}-\left(\frac{2x\hspace{0.33em}y}{x+y}-\max \left\{x,y\right\}\right)\Psi \left(1-\frac{\min \left\{x,y\right\}}{\max \left\{x,y\right\}}\right),which is an orthant–monotonic norm with ℓ(1,0)=ℓ(0,1)=1\ell \left(1,0)=\ell \left(0,1)=1, hence a stable tail dependence function. This norm arises as a convex probability mixture of norms of the form ℓFz{\ell }_{{F}^{z}}with F(x)=min{exp(x−1),1}F\left(x)=\min \left\{\exp \left(x-1),1\right\}, when randomizing the power zz, as demonstrated in [13, Example 3.6]. 
The associated random variable $Q$ can be simulated as follows:

1. Simulate $Z$ according to the aforementioned rejection-acceptance method (requires simulation of $S$).
2. Simulate $Y_F$ from the continuous density
$$f_{Y_F}(x) = \frac{Z^2 e^{-Z}}{Z + e^{-Z} - 1}\, x\, e^{Zx}, \qquad x \in (0,1),$$
which can be done via acceptance-rejection with comparison density $g(x) = 2x$.
3. Simulate $X_{F,1}, X_{F,2}$ iid from the distribution function $F(x) = \min\{\exp(Zx - Z), 1\}$, which is straightforward by the inversion method.
4. Apply formula (10) in order to simulate $Q$ from the simulated $X_{F,1}, X_{F,2}, Y_F$ and a Bernoulli variable $B$.

In order to make this example feasible, the choice of $S$ requires availability of $\Psi$ in closed form, as well as an exact simulation method for $S$. Examples that certainly work include finite-valued $S$, as well as $S$ with beta distribution whose density is proportional to $x \mapsto x^{a-1}(1-x)^{b-1}$ for parameters $a, b > 0$. In the latter case, we observe that
(11) $\Psi(x) = \dfrac{a+b}{b}\left(1 - \dfrac{\beta(a, b+x)}{\beta(a,b)}\right),$
where $\beta(\cdot,\cdot)$ denotes the beta function. This example originates from a random vector $(X,Y)$ with survival function $\exp(-\ell(x,y))$ that is defined by $(X,Y) = (L_{\epsilon_X}^{-1}, L_{\epsilon_Y}^{-1})$, where $\epsilon_X, \epsilon_Y$ are iid standard exponential random variables and, independently, $L = \{L_t\}_{t \ge 0}$ is a so-called Holmgren-Liouville subordinator. The latter is a non-decreasing stochastic process with long-range dependence properties. In particular, $(X,Y)$ are iid conditioned on $L$.
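The four steps above can be sketched as follows, here with $S$ uniform on a small finite grid as a stand-in (finite-valued $S$ is explicitly allowed in the text); the helper names and the concrete choice of $S$ are ours.

```python
import math
import random

def sample_Z(sample_S):
    # Step 1: f_Z(z) is proportional to (1 + z/log(1-z)) * f_S(z), and
    # 0 <= 1 + z/log(1-z) <= 1 on (0,1), so we may rejection-sample with
    # proposal S itself and acceptance probability 1 + z/log(1-z).
    while True:
        z = sample_S()
        if random.random() <= 1.0 + z / math.log(1.0 - z):
            return z

def sample_YF(z):
    # Step 2: f_{Y_F}(x) is proportional to x * e^{z x} on (0,1); rejection
    # with comparison density g(x) = 2x (i.e. X = sqrt(V)) and acceptance
    # probability e^{z(x-1)}.
    while True:
        x = math.sqrt(random.random())
        if random.random() <= math.exp(z * (x - 1.0)):
            return x

def sample_XF(z):
    # Step 3: inversion of F(x) = min{exp(z x - z), 1}; the atom of mass
    # e^{-z} at 0 is absorbed by the max(0, .) below.
    u = random.random()
    return max(0.0, 1.0 + math.log(u) / z)

def sample_Q_holmgren_liouville(sample_S):
    # Step 4: combine via formula (10) with a fair Bernoulli coin B.
    z = sample_Z(sample_S)
    yf = sample_YF(z)
    xf1, xf2 = sample_XF(z), sample_XF(z)
    if random.random() < 0.5:
        return yf / (yf + xf2)
    return xf1 / (xf1 + yf)

random.seed(5)
grid_S = lambda: random.choice([0.2, 0.5, 0.8])
qs = [sample_Q_holmgren_liouville(grid_S) for _ in range(20_000)]
assert all(0.0 <= q <= 1.0 for q in qs)
assert abs(sum(qs) / len(qs) - 0.5) < 0.01  # E[Q] = 1/2, as required by (4)
```

Replacing `grid_S` by an exact sampler for a beta-distributed $S$ recovers the case covered by formula (11).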
For the sake of completeness, we point out that Aℓ′{A}_{\ell }^{^{\prime} }is explicitly given by Aℓ′(x)=2Ψ(2)(1−2x)−(1−4max{x,1−x})Ψ1−min{x,1−x}max{x,1−x}+1−2max{x,1−x}max{x,1−x}Ψ′1−min{x,1−x}max{x,1−x}.\begin{array}{rcl}{A}_{\ell }^{^{\prime} }\left(x)& =& 2\Psi \left(2)\left(1-2x)-\left[(1-4\max \left\{x,1-x\right\})\Psi \left(1-\frac{\min \left\{x,1-x\right\}}{\max \left\{x,1-x\right\}}\right)\right.\\ & & \left.+\frac{1-2\max \left\{x,1-x\right\}}{\max \left\{x,1-x\right\}}{\Psi }^{^{\prime} }\left(1-\frac{\min \left\{x,1-x\right\}}{\max \left\{x,1-x\right\}}\right)\right].\end{array}In the particular special case with SShaving a beta distribution, the derivative Ψ′{\Psi }^{^{\prime} }of Ψ\Psi in (11) is given in terms of the Digamma function DΓ(x)=Γ′(x)/Γ(x){D}_{\Gamma }\left(x)={\Gamma }^{^{\prime} }\left(x)\hspace{0.1em}\text{/}\hspace{0.1em}\Gamma \left(x), which is pre-implemented in most software packages just like the Gamma function Γ(.)\Gamma \left(.)and Beta function β(.,.)\beta \left(.,.), to wit Ψ′(x)=a+bbβ(a,b+x)β(a,b)(DΓ(a+b+x)−DΓ(b+x)).{\Psi }^{^{\prime} }\left(x)=\frac{a+b}{b}\frac{\beta \left(a,b+x)}{\beta \left(a,b)}({D}_{\Gamma }\left(a+b+x)-{D}_{\Gamma }\left(b+x)).3.2Finite-valued QQ, resp. piece-wise constant Aℓ{A}_{\ell }The Pickands dependence function Aℓ{A}_{\ell }is piece-wise linear if and only if the associated random variable QQis discrete. We may approximate an arbitrary Aℓ{A}_{\ell }(from above due to convexity) by the function that takes the identical values on the grid i/ni\hspace{0.1em}\text{/}\hspace{0.1em}nfor n≥1n\ge 1and i=0,…,ni=0,\ldots ,nand which is piece-wise linear in between these grid points. The latter approximation corresponds to some finite-valued Qn{Q}_{n}. Since the corresponding approximation Aℓ,n{A}_{\ell ,n}converges point-wise to Aℓ{A}_{\ell }by construction, Qn{Q}_{n}converges in distribution to QQ, and also the associated copulas Cℓ,n{C}_{\ell ,n}approximate Cℓ{C}_{\ell }point-wise. 
From this perspective, the case of finite-valued QQis of utmost importance.Regarding the implementation, we recommend to apply the strategy proposed in [18], that is to work on the level of Aℓ′{A}_{\ell }^{^{\prime} }. To this end, we consider a Pickands dependence function Aℓ,n{A}_{\ell ,n}that is piece-wise constant between given values v0≔0<v1<…<vn≔1{v}_{0}:= 0\lt {v}_{1}\lt \ldots \lt {v}_{n}:= 1, with n≥1n\ge 1arbitrary but fixed. According to the methodology in [18], we parameterize this function in terms of the vectors v=(v0,…,vn){\boldsymbol{v}}=\left({v}_{0},\ldots ,{v}_{n})and a=(a1,…,an){\boldsymbol{a}}=\left({a}_{1},\ldots ,{a}_{n})subject to the conditions −1≤a1≤a2≤…≤an≤1,∑i=1nai(vi−vi−1)=0.-1\le {a}_{1}\le {a}_{2}\hspace{0.33em}\le \ldots \le {a}_{n}\le 1,\hspace{1.0em}\mathop{\sum }\limits_{i=1}^{n}{a}_{i}\left({v}_{i}-{v}_{i-1})=0.Intuitively, we define Aℓ,n{A}_{\ell ,n}piece-wise constant with Aℓ,n′(x)=ai{A}_{\ell ,n}^{^{\prime} }\left(x)={a}_{i}for vi−1≤x<vi{v}_{i-1}\le x\lt {v}_{i}, i=1,…,ni=1,\ldots ,n. With i(x)≔max{i∈{0,…,n}:vi≤x}i\left(x):= \max \left\{i\in \left\{0,\ldots ,n\right\}:{v}_{i}\le x\right\}, Aℓ,n{A}_{\ell ,n}is given by Aℓ,n(x)=1+(x−vi(x))ai(x)+1+∑j=1i(x)aj(vj−vj−1).{A}_{\ell ,n}\left(x)=1+(x-{v}_{i\left(x)}){a}_{i\left(x)+1}+\mathop{\sum }\limits_{j=1}^{i\left(x)}{a}_{j}\left({v}_{j}-{v}_{j-1}).With the convenient notation a0≔−1{a}_{0}:= -1, the associated random variable Qn{Q}_{n}has distribution function P(Qn≤q)=(1−ai(1−q))/2{\mathbb{P}}\left({Q}_{n}\le q)=\left(1-{a}_{i\left(1-q)})\hspace{0.1em}\text{/}\hspace{0.1em}2. 
The probability mass function is given by P(Qn=0)=1−an2,P(Qn=1)=1+a12,P(Qn=1−vi)=ai+1−ai2,i=1,…,n−1.{\mathbb{P}}\left({Q}_{n}=0)=\frac{1-{a}_{n}}{2},\hspace{1.0em}{\mathbb{P}}\left({Q}_{n}=1)=\frac{1+{a}_{1}}{2},\hspace{1em}{\mathbb{P}}({Q}_{n}=1-{v}_{i})=\frac{{a}_{i+1}-{a}_{i}}{2},\hspace{1.0em}i=1,\ldots ,n-1.An arbitrary Aℓ{A}_{\ell }may now be approximated by a piece-wise constant one of the above form Aℓ,n{A}_{\ell ,n}with v=(0,1/n,…,(n−1)/n,1){\boldsymbol{v}}=\left(0,1\hspace{0.1em}\text{/}\hspace{0.1em}n,\ldots ,\left(n-1)\hspace{0.1em}\text{/}\hspace{0.1em}n,1)by computing the parameters a1,…,an{a}_{1},\ldots ,{a}_{n}iteratively from the relation Aℓin=1+1n∑j=1iaj,i=1,…,n,{A}_{\ell }\left(\frac{i}{n}\right)=1+\frac{1}{n}\mathop{\sum }\limits_{j=1}^{i}{a}_{j},\hspace{1.0em}i=1,\ldots ,n,meaning that a1=nAℓ1n−1,ai=nAℓin−1−1n∑j=1i−1aj,i≥2.{a}_{1}=n\left[{A}_{\ell }\left(\frac{1}{n}\right)-1\right],\hspace{1.0em}{a}_{i}=n\left[{A}_{\ell }\left(\frac{i}{n}\right)-1-\frac{1}{n}\mathop{\sum }\limits_{j=1}^{i-1}{a}_{j}\right],\hspace{1.0em}i\ge 2.Since exact simulation of the finite-valued Qn{Q}_{n}is possible, like the evaluation of Aℓ,n{A}_{\ell ,n}and Aℓ,n′{A}_{\ell ,n}^{^{\prime} }, our algorithm is feasible. Figure 1 visualizes points simulated from Cˆℓ,n{\hat{C}}_{\ell ,n}when ℓ\ell is from the Hüsler-Reiss family in Example 3.1.Figure 1Scatter plot of 2,500 samples from Cˆℓ{\hat{C}}_{\ell }with ℓ\ell given by the Hüsler-Reiss family with parameter θ=1/4\theta =1\hspace{0.1em}\text{/}\hspace{0.1em}4(gray points), as well as from its approximation Cˆℓ,n{\hat{C}}_{\ell ,n}(black points) with n=10n=10(left plot) and n=80n=80(right plot).4The specific case of symmetric normsFrom the proof of Theorem 2.2 we recall the decomposition of the survival copula Cˆℓ{\hat{C}}_{\ell }in (5) of the Archimax copula Cℓ{C}_{\ell }into the parts on the ℓ\ell -simplex and inside the ℓ\ell -simplex. 
To wit, the distribution function Cˆℓ{\hat{C}}_{\ell }of the random vector (Xℓ,Yℓ)\left({X}_{\ell },{Y}_{\ell })in Theorem 2.2 admits the decomposition Cˆℓ=(1−pCˆℓ)Cˆℓc+pCˆℓCˆℓs,{\hat{C}}_{\ell }=\left(1-{p}_{{\hat{C}}_{\ell }}){\hat{C}}_{\ell }^{c}+{p}_{{\hat{C}}_{\ell }}{\hat{C}}_{\ell }^{s},where pCˆℓ=P(ℓ(Xℓ,Yℓ)=1){p}_{{\hat{C}}_{\ell }}={\mathbb{P}}(\ell \left({X}_{\ell },{Y}_{\ell })=1)denotes the probability mass on the ℓ\ell -simplex, Cˆℓs{\hat{C}}_{\ell }^{s}denotes the cdf of a singular distribution concentrated on the ℓ\ell -simplex and Cˆℓc{\hat{C}}_{\ell }^{c}denotes the cdf of a probability distribution concentrated inside the ℓ\ell -simplex. Note that Cˆℓc{\hat{C}}_{\ell }^{c}is only absolutely continuous, if the associated random variable QQfrom the Pickands representation (4) of ℓ\ell is absolutely continuous, such as assumed in the proof of Theorem 2.2. In the general case, it has singular parts concentrated on straight lines corresponding to atoms of QQ, as can be seen in Figure 1.In the special case of symmetric ℓ\ell , that is if ℓ(x,y)=ℓ(y,x)\ell \left(x,y)=\ell (y,x)for arbitrary x,yx,y, there is one phenomenon that we find interesting enough to discuss in the present context. The distribution function Cˆℓs{\hat{C}}_{\ell }^{s}equals the distribution function of a counter-monotonic random vector, meaning that Cˆℓs(x,y)=(G1,ℓ(x)+G2,ℓ(y)−1)+,x,y∈[0,1],{\hat{C}}_{\ell }^{s}\left(x,y)={({G}_{1,\ell }\left(x)+{G}_{2,\ell }(y)-1)}_{+},\hspace{1.0em}x,y\in \left[0,1],with marginal distribution functions G1,ℓ{G}_{1,\ell }and G2,ℓ{G}_{2,\ell }, respectively. A stochastic representation for (Xs,ℓ,Ys,ℓ)∼Cˆℓs\left({X}_{s,\ell },{Y}_{s,\ell })\hspace{0.33em} \sim \hspace{0.33em}{\hat{C}}_{\ell }^{s}is hence well-known to be given by (G1,ℓ−1(W),G2,ℓ−1(1−W))({G}_{1,\ell }^{-1}\left(W),{G}_{2,\ell }^{-1}\left(1-W))for W∼U[0,1]W\hspace{0.33em} \sim \hspace{0.33em}{\mathcal{U}}\left[0,1]. 
Since we know that this random vector is concentrated on the $\ell$-unit simplex, the norm $\ell$ determines the implicit relationship between $G_{1,\ell}$ and $G_{2,\ell}$ via the equation
$$\ell\big(G_{1,\ell}^{-1}(w),G_{2,\ell}^{-1}(1-w)\big)=1\quad\text{for all } w\in[0,1],$$
so that $G_{1,\ell}$ determines $G_{2,\ell}$ and vice versa. Now if $\ell$ is symmetric, it follows from exchangeability of $\hat{C}_\ell^{s}$ that $G_{1,\ell}=G_{2,\ell}=:G_\ell$. The distribution function $G_\ell$ is then uniquely and implicitly determined by $\ell$ via the equation
$$(12)\qquad \ell\big(G_\ell^{-1}(w),G_\ell^{-1}(1-w)\big)=1\quad\text{for all } w\in[0,1].$$
We furthermore see that
$$\ell(x,y)=1\iff \exists\, w\in[0,1]:\ (x,y)=\big(G_\ell^{-1}(w),G_\ell^{-1}(1-w)\big)\iff G_\ell(x)+G_\ell(y)=1,$$
which provides an alternative way to describe the unit simplex of $\ell$ in terms of the distribution function $G_\ell$. Summarizing, we obtain a correspondence between symmetric $\ell$ and convex and continuous distribution functions $G_\ell$ on $[0,1]$. (An arbitrary distribution function on $[0,1]$ that is convex is always continuous on $[0,1)$, but in general may have a jump at $1$, which we exclude by saying that $G_\ell$ is continuous on all of $[0,1]$.)

Lemma 4.1 (Correspondence between symmetric $\ell$ and $G_\ell$) For a symmetric stable tail dependence function $\ell$, the distribution function $G_\ell$ is continuous and convex on $[0,1]$.
Conversely, any continuous and convex distribution function $G$ on $[0,1]$ implicitly defines a symmetric stable tail dependence function $\ell=\ell_G$, which is uniquely determined by its unit simplex $\{(x,y)\in[0,1]^2:\ G(x)+G(y)=1\}$.

Proof. Continuity of $G_\ell$ is immediate, since a jump would imply a non-empty interval $(a,b)\subset(0,1)$ on which $G_\ell^{-1}$ was constant, contradicting the fact that $w\mapsto(G_\ell^{-1}(w),G_\ell^{-1}(1-w))$ is the closed curve that determines the $\ell$-unit simplex. To verify convexity, assume the converse, that is, assume $G_\ell$ is not convex. Then we necessarily find two distinct $x,y\in[0,1]$ and $\alpha\in(0,1)$ such that
$$G_\ell(\alpha x+(1-\alpha)y)>\alpha\, G_\ell(x)+(1-\alpha)\,G_\ell(y).$$
By continuity of $G_\ell$, we can always choose $x$ and $y$ such that the last inequality holds for all $\alpha\in(0,1)$ jointly.
In particular, for fixed $\alpha\in(0,1)$ we also have this inequality for $1-\alpha$, that is,
$$G_\ell(\alpha y+(1-\alpha)x)>\alpha\, G_\ell(y)+(1-\alpha)\,G_\ell(x).$$
But this implies
$$G_\ell(x)+G_\ell(y)=\alpha\, G_\ell(x)+(1-\alpha)\,G_\ell(y)+\alpha\, G_\ell(y)+(1-\alpha)\,G_\ell(x)<G_\ell(\alpha x+(1-\alpha)y)+G_\ell(\alpha y+(1-\alpha)x).$$
This implies that
$$\ell(x,y)<\ell(\alpha x+(1-\alpha)y,\ \alpha y+(1-\alpha)x),$$
or equivalently, in terms of the convex function $A_\ell$, that
$$A_\ell\left(\frac{x}{x+y}\right)<A_\ell\left(\frac{\alpha x+(1-\alpha)y}{x+y}\right)\le\alpha\, A_\ell\left(\frac{x}{x+y}\right)+(1-\alpha)\,A_\ell\left(\frac{y}{x+y}\right),$$
for all $\alpha\in(0,1)$. Reversing the roles of $x$ and $y$ in this derivation, we hence obtain that
$$\max\left\{A_\ell\left(\frac{x}{x+y}\right),A_\ell\left(\frac{y}{x+y}\right)\right\}<\alpha\, A_\ell\left(\frac{x}{x+y}\right)+(1-\alpha)\,A_\ell\left(\frac{y}{x+y}\right),$$
for all $\alpha\in(0,1)$, contradicting convexity of $A_\ell$.

Conversely, let $G$ be a convex and continuous distribution function on $[0,1]$.
Since norms in $\mathbb{R}^2$ are uniquely determined by their unit balls, which are in one-to-one correspondence with closed convex sets containing the origin in their interior, all we need to prove is that the closed set $B_G:=\{(x,y)\in\mathbb{R}^2:\ G(x)+G(y)\le 1\}$ is convex, which follows immediately from the convexity of $G$, as can readily be verified by the readers themselves. □

As a consequence, we can improve our simulation algorithm based on the stochastic representation (7) for symmetric $\ell$ under the following (strong) assumption that is sometimes satisfied:

Assumption 5 (Strong additional assumption for symmetric $\ell$) We can evaluate exactly the unique implicit function $G_\ell^{-1}$ that satisfies (12).

Having the possibility to evaluate $G_\ell^{-1}$ explicitly, the stochastic representation (7) in the exchangeable case of symmetric $\ell$ can be replaced with a simpler expression.

Lemma 4.2 (Simpler sampling algorithm for symmetric $\ell$) We consider the same setting as in Theorem 2.2, with the additional assumption that $\ell(x,y)=\ell(y,x)$ for all $x,y\ge 0$ and with Assumption 5 satisfied. We further assume that $W\sim\mathcal{U}[0,1]$ is independent of $V,Q$, which are the same as in Theorem 2.2.
Then a random vector $(X_\ell,Y_\ell)\sim\hat{C}_\ell$ admits the stochastic representation
$$(X_\ell,Y_\ell)\sim\begin{cases}\big(G_\ell^{-1}(W),G_\ell^{-1}(1-W)\big), & \text{if }\ \ell\left(\frac{V}{2Q},\frac{V}{2(1-Q)}\right)\ge 1,\\[4pt] \left(\frac{V}{2Q},\frac{V}{2(1-Q)}\right), & \text{if }\ \ell\left(\frac{V}{2Q},\frac{V}{2(1-Q)}\right)<1.\end{cases}$$

Proof. The statement follows from the text preceding this lemma. □

We provide two examples of situations in which the implicit definition of $G_\ell$ is easy to make explicit, so that Assumption 5 is met.

Example 4.3 (The case $\ell=\Vert\cdot\Vert_p$) Obviously, $G_\ell^{-1}(w)=w^{1/p}$ does the job. Note that this is well in line with known results about $\Vert\cdot\Vert_p$-symmetric survival functions in [15].

Example 4.4 (The case of $Q$ with just two atoms) If we consider, for $q\in[0,1/2]$, the norm
$$\ell(x,y)=\max\{qx,(1-q)y\}+\max\{(1-q)x,qy\},$$
it is not difficult to verify that $G_\ell^{-1}(w)=(q+(1-2q)w)/(1-q)$ does the job. Note that these stable tail dependence functions form the extremal boundary of the simplex of all symmetric bivariate stable tail dependence functions, see [14].
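Both examples can be verified numerically against the defining equation (12). A minimal sketch (the parameter values $p=3$ and $q=0.25$ are arbitrary illustrative choices):

```python
# Numerical check of equation (12) for Examples 4.3 and 4.4:
# ell(G^{-1}(w), G^{-1}(1-w)) = 1 for all w in [0, 1].

def ell_p(x, y, p=3.0):
    # Example 4.3: the p-norm as stable tail dependence function.
    return (x**p + y**p) ** (1.0 / p)

def Ginv_p(w, p=3.0):
    return w ** (1.0 / p)

def ell_two_atoms(x, y, q=0.25):
    # Example 4.4: Q with two atoms, q in [0, 1/2].
    return max(q * x, (1 - q) * y) + max((1 - q) * x, q * y)

def Ginv_two_atoms(w, q=0.25):
    return (q + (1 - 2 * q) * w) / (1 - q)

for k in range(101):
    w = k / 100
    assert abs(ell_p(Ginv_p(w), Ginv_p(1 - w)) - 1.0) < 1e-12
    assert abs(ell_two_atoms(Ginv_two_atoms(w), Ginv_two_atoms(1 - w)) - 1.0) < 1e-12
```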
The probability mass of $\hat{C}_\ell$ in this case is fully concentrated on the boundary of the triangle with vertices $(0,0)$, $(q/(1-q),1)$, and $(1,q/(1-q))$.

As a final remark, we point out that our simulation algorithm can be utilized to simulate a random variable $S\sim G_\ell$ under Assumptions 1–3 on the norm $\ell$ only, that is, even without having $G_\ell$ or even $G_\ell^{-1}$ in explicit form. To this end, we perform the following steps, again with $Q$ denoting the random variable associated with $\ell$ according to the Pickands representation (4):

1. Until $\ell\left(\frac{V}{2Q},\frac{V}{2(1-Q)}\right)>1$, repeatedly simulate independent instances of $V$ and $Q$.
2. With $Q$ from the previous step, run Algorithm 1 in order to simulate $X_{h_Q}$.
3. Return $S=X_{h_Q}/A_\ell(X_{h_Q})\sim G_\ell$.

In probabilistic terms, $S\sim G_\ell$ has the same distribution as $X_{h_{\tilde{Q}}}/A_\ell(X_{h_{\tilde{Q}}})$, where $X_{h_{\tilde{Q}}}\sim h_{\tilde{Q}}$ and $\tilde{Q}$ has distribution given by
$$\mathbb{P}(\tilde{Q}\in\mathrm{d}q)=\frac{c_q}{p_{\hat{C}_\ell}}\,\mathbb{P}(Q\in\mathrm{d}q).$$

5 Bivariate reciprocal Archimax copulas

The presented stochastic model for Archimax copulas can be leveraged to obtain an exact simulation algorithm for bivariate random vectors that are max-infinitely divisible and whose exponent measure has an $\ell$-symmetric survival function.
Their associated copulas take the form (2), so that it is plausible to call them reciprocal Archimax copulas, generalizing the terminology of [7] from the special case $\ell=\Vert\cdot\Vert_1$ to general $\ell$. We briefly explain this, the logic being identical to that outlined in [15, Section 5] for the special case $\ell=\Vert\cdot\Vert_p$. Concretely, a measure $\mu$ on $E:=[0,\infty]^2\setminus\{(0,0)\}$ with
$$\mu(E\setminus[0,x]\times[0,y])<\infty\quad\forall\, x,y>0,\qquad \lim_{x,y\to\infty}\mu(E\setminus[0,x]\times[0,y])=0,$$
is called an exponent measure. We say that $\mu$ has an $\ell$-symmetric survival function if the survival function of $\mu$ depends on its argument $(x,y)$ only through its norm $\ell(x,y)$, i.e., $\mu((x,\infty]\times(y,\infty])=\psi(\ell(x,y))$ for some function $\psi$ in one variable.
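As a small numeric sanity check that such a composition $\psi\circ\ell$ can indeed serve as the survival function of a (nonnegative) measure, the sketch below verifies that all rectangle masses obtained by inclusion-exclusion are nonnegative; the choices $\psi(s)=s^{-\theta}$ and $\ell=\Vert\cdot\Vert_p$ with $\theta=p=2$ are hypothetical illustrations, anticipating the feasibility condition discussed next:

```python
theta, p = 2.0, 2.0   # hypothetical parameter choices

def surv(x, y):
    # mu((x, inf] x (y, inf]) = psi(ell(x, y)) with psi(s) = s^(-theta), ell the p-norm.
    return (x**p + y**p) ** (-theta / p)

# Mass of the rectangle (x1, x2] x (y1, y2] via inclusion-exclusion of the
# survival function; it must be nonnegative for mu to be a measure.
grid = [0.25 * k for k in range(1, 21)]   # 0.25, 0.5, ..., 5.0
for i in range(len(grid) - 1):
    for j in range(len(grid) - 1):
        x1, x2 = grid[i], grid[i + 1]
        y1, y2 = grid[j], grid[j + 1]
        mass = surv(x1, y1) - surv(x2, y1) - surv(x1, y2) + surv(x2, y2)
        assert mass >= -1e-12
```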
Feasible functions $\psi:(0,\infty)\to[0,\infty)$ are well known to be exactly those that are convex with $\lim_{x\searrow 0}\psi(x)=\infty$ and $\lim_{x\to\infty}\psi(x)=0$, and this is the case if and only if there exists a non-finite Radon measure $\nu_\psi$ on $(0,\infty]$ with $\nu_\psi(\{\infty\})=0$ and
$$\psi(x)=\int_x^{\infty}\left(1-\frac{x}{r}\right)\nu_\psi(\mathrm{d}r).$$
Given a pair $(\ell,\psi)$ of a stable tail dependence function and such a convex $\psi$, defining an exponent measure $\mu$ with $\ell$-norm symmetric survival function, the random vector $(X_*,Y_*)$ on $(0,\infty)^2$ with distribution function
$$\mathbb{P}(X_*\le x,\,Y_*\le y)=e^{-\mu(E\setminus[0,x]\times[0,y])}=e^{-\psi(x)-\psi(y)+\psi(\ell(x,y))},\quad x,y\ge 0,$$
is said to be max-infinitely divisible with exponent measure $\mu$.
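For twice differentiable $\psi$, differentiating the integral representation above twice identifies the density of $\nu_\psi$ as $\nu_\psi(\mathrm{d}r)=r\,\psi''(r)\,\mathrm{d}r$. A hedged numeric sketch, taking the hypothetical generator $\psi(x)=x^{-\theta}$ (so the candidate density is $\theta(\theta+1)r^{-\theta-1}$), recovers $\psi$ from $\nu_\psi$ by simple quadrature:

```python
import math

theta = 2.0
psi = lambda x: x ** (-theta)

def nu_density(r):
    # Candidate density of nu_psi: r * psi''(r) = theta*(theta+1) * r^(-theta-1).
    return theta * (theta + 1.0) * r ** (-theta - 1.0)

def psi_from_nu(x, upper=1.0e6, steps=100_000):
    # psi(x) = int_x^inf (1 - x/r) nu_psi(dr), trapezoid rule on a log grid;
    # the tail beyond `upper` is negligible for theta = 2.
    total, prev_r, prev_f = 0.0, x, 0.0   # integrand vanishes at r = x
    log_lo, log_hi = math.log(x), math.log(upper)
    h = (log_hi - log_lo) / steps
    for k in range(1, steps + 1):
        r = math.exp(log_lo + k * h)
        f = (1.0 - x / r) * nu_density(r)
        total += 0.5 * (prev_f + f) * (r - prev_r)
        prev_r, prev_f = r, f
    return total

assert abs(psi_from_nu(1.0) - psi(1.0)) < 1e-3
```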
Note that $X_*$ and $Y_*$ both have the identical marginal distribution function $x\mapsto\exp\{-\psi(x)\}$, so their associated copula is computed to be
$$C_{*,\psi,\ell}(x,y)=xy\exp\{\psi[\ell(\psi^{-1}\{-\log(x)\},\psi^{-1}\{-\log(y)\})]\}=\frac{xy}{C_{e^{-\psi},\ell}(x,y)}.$$
An exact simulation algorithm for $C_{*,\psi,\ell}$ requires exact simulation of $(X_\ell,Y_\ell)$, which is feasible under Assumptions 1–3, as well as the following assumption on $\psi$, respectively its representing measure $\nu_\psi$:

Assumption 6 (Additional requirement for reciprocal Archimax copulas) The function $\psi$ and the inverse $G_{\nu_\psi}^{-1}$ of the survival function $G_{\nu_\psi}(x):=\nu_\psi((x,\infty])$ can be evaluated exactly.

Algorithm 2 provides the pseudo-code for the simulation of a sample $(U,V)\sim C_{*,\psi,\ell}$ under Assumptions 1, 2, 3, and 6; the proof idea is an immediate generalization of [15, Section 5] from $\Vert\cdot\Vert_p$ to general $\ell$ that the readers can verify themselves. Note that the occurring while-loop certainly ends, since $\eta=G_{\nu_\psi}^{-1}(T)$ decreases to zero due to the non-finiteness of $\nu_\psi$. In Algorithm 2, the sub-routine SimulateExp returns an exponential random variable with unit mean.

Figure 2 depicts scatter plots for two examples of copulas of the form $C_{*,\psi,\ell}$. In both plots, the function $\psi=\psi_a$ with a parameter $a>0$ is chosen to be
$$(13)\qquad \psi_a(x)=a\left\lfloor\frac{1}{x}\right\rfloor\left(1-\frac{x\left(\left\lfloor\frac{1}{x}\right\rfloor+1\right)}{2}\right),\quad x>0.$$

Algorithm 2.
Simulation of $(U,V)\sim C_{*,\psi,\ell}$

1: procedure Simulate $C_*(\ell,\psi)$
2:   $(X_*,Y_*)\leftarrow(0,0)$
3:   $T\leftarrow$ SimulateExp
4:   $\eta\leftarrow G_{\nu_\psi}^{-1}(T)$
5:   while $\eta>\min\{X_*,Y_*\}$
6:     Simulate $(X_\ell,Y_\ell)\sim\hat{C}_\ell$ according to (7)
7:     $X_*\leftarrow\max\{X_*,\eta\, X_\ell\}$
8:     $Y_*\leftarrow\max\{Y_*,\eta\, Y_\ell\}$
9:     $T\leftarrow T+$ SimulateExp
10:    $\eta\leftarrow G_{\nu_\psi}^{-1}(T)$
11:  end while
12:  return $(U,V)=(e^{-\psi(X_*)},e^{-\psi(Y_*)})$
13: end procedure

The generator $\psi_a$ in (13) corresponds to a discrete measure $\nu_\psi$ whose required inverse of the survival function equals $G_{\nu_\psi}^{-1}(x)=1/\lceil x/a\rceil$, see [15, Example 3].

Figure 2: Scatter plot of 25,000 samples from $C_{*,\psi,\ell}$ with $\psi_a$ given in (13) for $a=1.5$ and two different $\ell$ (left: $\ell$ from Example 4.4 with $q=0.45$; right: $\ell$ piece-wise constant, corresponding to some (asymmetric) $Q$ with six atoms and $\mathbb{P}(Q=0),\mathbb{P}(Q=1)>0$, i.e., full support).

Finally, we collect some immediate properties of $C_{*,\psi,\ell}$.
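As an aside, the pseudo-code of Algorithm 2 translates almost line by line into Python. In the sketch below, the sampler for $\hat{C}_\ell$ required in line 6 is passed in as a callable; the demo call plugs in independent uniforms as a mere placeholder (not a genuine $\hat{C}_\ell$ sampler), together with $\psi=\psi_a$ from (13) for $a=1.5$ and its stated inverse survival function $G_{\nu_\psi}^{-1}(x)=1/\lceil x/a\rceil$. The block also cross-checks (13) against the discrete measure $\nu_\psi=a\sum_{k\ge 1}\delta_{1/k}$, which we believe to be the measure behind [15, Example 3]:

```python
import math
import random

def psi_a(x, a=1.5):
    # Generator psi_a from equation (13).
    m = math.floor(1.0 / x)
    return a * m * (1.0 - x * (m + 1) / 2.0)

def psi_from_atoms(x, a=1.5):
    # psi_a recovered from the (assumed) discrete measure a * sum_k delta_{1/k}:
    # psi(x) = sum over atoms r = 1/k with 1/k > x of a * (1 - x*k).
    return a * sum(1.0 - x * k for k in range(1, math.floor(1.0 / x) + 1))

def G_nu_inv(t, a=1.5):
    # Stated inverse of the survival function G_nu(x) = nu_psi((x, inf]).
    return 1.0 / math.ceil(t / a)

def simulate_C_star(sample_C_hat, psi=psi_a, G_inv=G_nu_inv, rng=random):
    # Algorithm 2: exact simulation of (U, V) ~ C_{*,psi,ell}.
    X, Y = 0.0, 0.0
    T = rng.expovariate(1.0)             # SimulateExp: unit-mean exponential
    eta = G_inv(T)
    while eta > min(X, Y):
        Xl, Yl = sample_C_hat()          # line 6: (X_ell, Y_ell) ~ C-hat_ell
        X, Y = max(X, eta * Xl), max(Y, eta * Yl)
        T += rng.expovariate(1.0)
        eta = G_inv(T)                   # decreases to 0, so the loop ends
    return math.exp(-psi(X)), math.exp(-psi(Y))

random.seed(2022)
# Placeholder sampler (independence copula) just to exercise the loop structure:
U, V = simulate_C_star(lambda: (random.random(), random.random()))
assert 0.0 <= U <= 1.0 and 0.0 <= V <= 1.0
assert abs(psi_a(0.35) - psi_from_atoms(0.35)) < 1e-12
```

Only the placeholder needs to be swapped for a sampler based on the representation (7), or on Lemma 4.2 for symmetric $\ell$, to obtain exact samples.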
Whereas traditional concordance measures like Spearman's rho or Kendall's tau appear difficult to compute in convenient form for $C_{*,\psi,\ell}$, the following properties are straightforward to observe:

Support of $C_{*,\psi,\ell}$: The left and right end points of the support of $Q$ influence the support of $C_{*,\psi,\ell}$. Assume that $Q$ has support inside the interval $[b_Q,u_Q]$ with $b_Q>0$ and/or $u_Q<1$. It follows that $A_\ell(x)=1-x$ on $[0,b_Q]$ and/or $A_\ell(x)=x$ on $[u_Q,1]$. A simple computation shows that this implies
$$C_{*,\psi,\ell}(x,y)=\begin{cases}x, & \text{if }\ x\le\exp\left\{-\psi\left(\frac{b_Q}{1-b_Q}\,\psi^{-1}[-\log(y)]\right)\right\},\\[4pt] y, & \text{if }\ y\le\exp\left\{-\psi\left(\frac{1-u_Q}{u_Q}\,\psi^{-1}[-\log(x)]\right)\right\},\end{cases}$$
so that the copula assigns no mass to these two areas, since the second derivative vanishes there. Consequently, if $(x,y)$ is in the support of $C_{*,\psi,\ell}$, then necessarily
$$\exp\left\{-\psi\left(\frac{1-u_Q}{u_Q}\,\psi^{-1}[-\log(x)]\right)\right\}\le y\le\exp\left\{-\psi\left(\frac{1-b_Q}{b_Q}\,\psi^{-1}[-\log(x)]\right)\right\}.$$
Note that the lower (upper) bound becomes trivially equal to $0$ (equal to $1$) if $u_Q=1$ ($b_Q=0$).
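These support bounds can be checked numerically. The sketch below combines two hypothetical choices, the two-atom $\ell$ of Example 4.4 (for which $b_Q=q$ and $u_Q=1-q$) and the invertible generator $\psi(x)=x^{-\theta}$, and confirms that $C_{*,\psi,\ell}(x,y)=x$ below the stated threshold:

```python
import math

theta, q = 2.0, 0.25              # hypothetical parameter choices

psi = lambda s: s ** (-theta)
psi_inv = lambda t: t ** (-1.0 / theta)

def ell(x, y):
    # Two-atom ell from Example 4.4; the support of Q is [q, 1-q].
    return max(q * x, (1 - q) * y) + max((1 - q) * x, q * y)

def C_star(x, y):
    # C_{*,psi,ell}(x, y) = x * y * exp{psi(ell(psi^{-1}(-log x), psi^{-1}(-log y)))}.
    s, t = psi_inv(-math.log(x)), psi_inv(-math.log(y))
    return x * y * math.exp(psi(ell(s, t)))

for y in [0.2, 0.5, 0.8]:
    threshold = math.exp(-psi(q / (1 - q) * psi_inv(-math.log(y))))
    x = 0.5 * threshold           # strictly inside the C(x, y) = x region
    assert abs(C_star(x, y) - x) < 1e-12
```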
The left plot in Figure 2 depicts an example whose support lies strictly inside the unit square $[0,1]^2$, since $b_Q=0.45=1-u_Q$ in this case.

Tail dependence: Just like in the case of bivariate Archimax copulas, the coefficients of upper- and lower-tail dependence depend on regular variation properties of the generator $\psi$ as well as on the value $\ell(1,1)\in[1,2]$, and both lower- and upper-tail dependence can be present or not. To this end, we recall that a real function $f$ is called regularly varying at some point $*\in[-\infty,\infty]$ with index $-\rho$ for $\rho\in(0,\infty)$ if $\lim_{t\to *}f(xt)/f(t)=x^{-\rho}$ for each $x>0$, this property being denoted $f\in\mathcal{R}_{-\rho}^{*}$. With this terminology, we immediately obtain
$$e^{\psi}\in\mathcal{R}_{-\rho}^{0}\ \text{ for }\ \rho\in(0,\infty)\ \Rightarrow\ \lim_{x\searrow 0}\frac{C_{*,\psi,\ell}(x,x)}{x}=\ell(1,1)^{-\rho}.$$
The limit is called the lower-tail dependence coefficient and measures the degree of accumulation of mass in the lower-left corner of $[0,1]^2$.
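This implication can be illustrated numerically. The sketch below uses a hypothetical generator not taken from the text, $\psi(x)=\theta\log(1+1/x)$, which is convex with the correct limits and satisfies $e^{\psi(x)}=(1+1/x)^{\theta}\in\mathcal{R}_{-\theta}^{0}$, together with $\ell=\Vert\cdot\Vert_p$, so the predicted limit is $\ell(1,1)^{-\theta}=2^{-\theta/p}$:

```python
import math

theta, p = 2.0, 2.0   # hypothetical parameter choices

psi = lambda s: theta * math.log1p(1.0 / s)
psi_inv = lambda t: 1.0 / math.expm1(t / theta)

def C_star_diag(x):
    # C_{*,psi,ell}(x, x) for ell = the p-norm, where ell(s, s) = 2^(1/p) * s.
    s = psi_inv(-math.log(x))
    return x * x * math.exp(psi(2.0 ** (1.0 / p) * s))

x = 1e-8
ratio = C_star_diag(x) / x
limit = 2.0 ** (-theta / p)       # ell(1,1)^(-rho) with rho = theta
assert abs(ratio - limit) < 1e-2  # close to the lower-tail dependence coefficient
```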
A concrete example is given by the generator $\psi(x)=-\log(1-\exp\{-x^{\theta}\})$ for some $\theta\ge 1$, for which we observe $\exp(\psi)\in\mathcal{R}_{-\theta}^{0}$.

Similarly, if $\psi'$ exists on an interval $(T,\infty)$ for some $T\ge 0$, we obtain
$$\psi'\in\mathcal{R}_{-\rho}^{\infty}\ \text{ for }\ \rho\in(1,\infty)\ \Rightarrow\ \lim_{x\nearrow 1}\frac{C_{*,\psi,\ell}(x,x)-2x+1}{1-x}=\ell(1,1)^{1-\rho}.$$
The limit is called the upper-tail dependence coefficient and measures the degree of accumulation of mass in the upper-right corner of $[0,1]^2$. A concrete example is given by the generator $\psi(x)=x^{-\theta}$ with $\theta>0$, for which we observe $\psi'\in\mathcal{R}_{-(1+\theta)}^{\infty}$.

6 Conclusion and outlook

We have derived a novel simulation algorithm for bivariate Archimax copulas, demonstrating its feasibility with many examples. Furthermore, the algorithm was used to obtain an exact simulation algorithm for the family of bivariate reciprocal Archimax copulas. A generalization of the algorithm to the multi-dimensional case $d\ge 3$ remains an interesting open problem, which to date is solved only for the particular case $\ell=\Vert\cdot\Vert_p$, see [15].

Journal: Dependence Modeling (De Gruyter)

Published: Jan 1, 2022

Keywords: Archimax copula; extreme-value copula; Archimedean copula; max-infinitely divisible; simulation algorithm; 65C05; 60E05; 62H05
