A combinatorial proof of the Gaussian product inequality beyond the MTP2 case

1 Introduction

The Gaussian product inequality (GPI) is a long-standing conjecture which states that for any centered Gaussian random vector $\boldsymbol{X} = (X_1, \ldots, X_d)$ of dimension $d \in \mathbb{N} = \{1, 2, \ldots\}$ and every integer $m \in \mathbb{N}$, one has

(1)  $$\mathrm{E}\left(\prod_{i=1}^{d} X_i^{2m}\right) \ge \prod_{i=1}^{d} \mathrm{E}(X_i^{2m}).$$

This inequality is known to imply the real polarization problem conjecture in functional analysis [14], and it is related to the so-called $U$-conjecture, to the effect that if $P$ and $Q$ are two nonconstant polynomials on $\mathbb{R}^d$ such that the random variables $P(\boldsymbol{X})$ and $Q(\boldsymbol{X})$ are independent, then there exist an orthogonal transformation $L$ on $\mathbb{R}^d$ and an integer $k \in \{1, \ldots, d-1\}$ such that $P \circ L$ is a function of $(X_1, \ldots, X_k)$ and $Q \circ L$ is a function of $(X_{k+1}, \ldots, X_d)$; see, e.g., [7,14] and the references therein.

Inequality (1) is well known to be true when $m = 1$; see, e.g., Frenkel [5]. Karlin and Rinott [8] also showed that it holds when the random vector $|\boldsymbol{X}| = (|X_1|, \ldots, |X_d|)$ has a multivariate totally positive density of order 2, denoted $\mathrm{MTP}_2$. As stated in Remark 1.4 of their paper, the latter condition is satisfied, among others, in dimension $d = 2$ for all nonsingular Gaussian random pairs. Interest in the problem recently gained traction when Lan et al. [11] established the inequality in dimension $d = 3$.
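As a small illustration of the known $m = 1$ case, the left-hand side of (1) can be computed exactly from the covariance matrix via Isserlis' (Wick's) theorem. The sketch below does this in rational arithmetic for a made-up $3 \times 3$ covariance matrix with mixed-sign correlations; it illustrates the classical $m = 1$ result, not the paper's method.

```python
# Exact check of inequality (1) for m = 1, d = 3: compute E(X1^2 X2^2 X3^2)
# from the covariance matrix by Isserlis' (Wick's) theorem and compare it
# with E(X1^2) E(X2^2) E(X3^2).  The covariance matrix is a made-up example
# with mixed-sign correlations; it is not taken from the paper.
from fractions import Fraction

def isserlis(cov, idx):
    """E(X_{i1} ... X_{ik}) for a centered Gaussian vector, via the pairing
    recursion E(X_{i1} ... X_{ik}) = sum_j cov[i1][ij] * E(remaining)."""
    if len(idx) == 0:
        return Fraction(1)
    if len(idx) % 2 == 1:
        return Fraction(0)          # odd moments vanish
    first, rest = idx[0], idx[1:]
    total = Fraction(0)
    for j in range(len(rest)):
        total += cov[first][rest[j]] * isserlis(cov, rest[:j] + rest[j + 1:])
    return total

cov = [[Fraction(1), Fraction(1, 2), Fraction(-1, 3)],
       [Fraction(1, 2), Fraction(1), Fraction(1, 4)],
       [Fraction(-1, 3), Fraction(1, 4), Fraction(1)]]

lhs = isserlis(cov, (0, 0, 1, 1, 2, 2))   # E(X1^2 X2^2 X3^2)
rhs = cov[0][0] * cov[1][1] * cov[2][2]   # E(X1^2) E(X2^2) E(X3^2)
print(lhs, ">=", rhs, ":", lhs >= rhs)    # 109/72 >= 1 : True
```

For unit variances, Wick's theorem gives the closed form $\mathrm{E}(X_1^2 X_2^2 X_3^2) = 1 + 2(\rho_{12}^2 + \rho_{13}^2 + \rho_{23}^2) + 8\rho_{12}\rho_{13}\rho_{23}$, which the recursion reproduces term by term.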
Hope that the result might be true in general is also fueled by the fact, established by Wei [25], that for any reals $\alpha_1, \ldots, \alpha_d \in (-1/2, 0)$, one has

(2)  $$\mathrm{E}\left(\prod_{i=1}^{d} |X_i|^{2\alpha_i}\right) \ge \prod_{i=1}^{d} \mathrm{E}(|X_i|^{2\alpha_i}).$$

Li and Wei [13] have further conjectured that the latter inequality holds for all reals $\alpha_1, \ldots, \alpha_d \in [0, \infty)$ and any centered Gaussian random vector $\boldsymbol{X}$.

The purpose of this paper is to report a combinatorial proof of inequality (2) in the special case where the reals $\alpha_1, \ldots, \alpha_d$ are nonnegative integers and each of the components $X_1, \ldots, X_d$ of the centered Gaussian random vector $\boldsymbol{X}$ can be written as a linear combination, with coefficients of identical sign, of the components of a standard Gaussian random vector. A precise statement of this assumption is given as Condition (III) in Section 2, and the proof of the main result, Proposition 2, appears in Section 3. It is then shown in Section 4 (see Proposition 3) that this condition is strictly weaker than the assumption that the random vector $|\boldsymbol{X}|$ is $\mathrm{MTP}_2$.

Coincidentally, shortly after the first version of the present paper was posted on arXiv, inequality (2) for all nonnegative integers $\alpha_1, \ldots, \alpha_d \in \mathbb{N}_0 = \{0, 1, \ldots\}$ was established under an even weaker assumption, stated as Condition (IV) in Section 2. The latter condition states that, up to a change of sign, the components of the Gaussian random vector $\boldsymbol{X}$ are all nonnegatively correlated; see Lemma 2.3 of Russell and Sun [22]. Therefore, the present paper's main contribution resides in its method of proof, which rests on a combinatorial argument closely related to the complete monotonicity of multinomial probabilities previously shown by Ouimet [16] and Qi et al. [17].

All background material required to understand the contribution and put it in perspective is provided in Section 2. The statements and proofs of the paper's results are then presented in Sections 3 and 4. The article concludes with a brief discussion in Section 5. For completeness, a technical lemma due to Ouimet [16], which is used in the proof of Proposition 2, is included in the Appendix.

2 Background

First, recall the definition of multivariate total positivity of order 2 ($\mathrm{MTP}_2$) on a set $\mathcal{S} \subseteq \mathbb{R}^d$.

Definition 1. A density $f : \mathbb{R}^d \to [0, \infty)$ supported on $\mathcal{S}$ is said to be multivariate totally positive of order 2, denoted $\mathrm{MTP}_2$, if and only if, for all vectors $\boldsymbol{x} = (x_1, \ldots, x_d), \boldsymbol{y} = (y_1, \ldots, y_d) \in \mathcal{S}$, one has
$$f(\boldsymbol{x} \vee \boldsymbol{y})\, f(\boldsymbol{x} \wedge \boldsymbol{y}) \ge f(\boldsymbol{x})\, f(\boldsymbol{y}),$$
where $\boldsymbol{x} \vee \boldsymbol{y} = (\max(x_1, y_1), \ldots, \max(x_d, y_d))$ and $\boldsymbol{x} \wedge \boldsymbol{y} = (\min(x_1, y_1), \ldots, \min(x_d, y_d))$.

Densities in this class have many interesting properties, including the following result, which corresponds to equation (1.7) of Karlin and Rinott [8].

Proposition 1. Let $\boldsymbol{Y}$ be an $\mathrm{MTP}_2$ random vector on $\mathcal{S}$, and let $\varphi_1, \ldots, \varphi_r$ be a collection of nonnegative and (component-wise) nondecreasing functions on $\mathcal{S}$. Then
$$\mathrm{E}\left\{\prod_{i=1}^{r} \varphi_i(\boldsymbol{Y})\right\} \ge \prod_{i=1}^{r} \mathrm{E}\{\varphi_i(\boldsymbol{Y})\}.$$

In particular, let $\boldsymbol{X} = (X_1, \ldots, X_d)$ be a $d$-variate Gaussian random vector with zero mean and nonsingular covariance matrix $\mathrm{var}(\boldsymbol{X})$. Suppose that the following condition holds.

(I) The random vector $|\boldsymbol{X}| = (|X_1|, \ldots, |X_d|)$ belongs to the $\mathrm{MTP}_2$ class on $[0, \infty)^d$.

Under Condition (I), the validity of the GPI conjecture (2) for all reals $\alpha_1, \ldots, \alpha_d \in [0, \infty)$ follows from Proposition 1 with $r = d$ and maps $\varphi_1, \ldots, \varphi_d$ defined, for every vector $\boldsymbol{y} = (y_1, \ldots, y_d) \in [0, \infty)^d$ and integer $i \in \{1, \ldots, d\}$, by $\varphi_i(\boldsymbol{y}) = y_i^{2\alpha_i}$.

When $\boldsymbol{X} = (X_1, \ldots, X_d)$ is a centered Gaussian random vector with covariance matrix $\mathrm{var}(\boldsymbol{X})$, Theorem 3.1 of Karlin and Rinott [8] establishes an equivalence between Condition (I) and the requirement that, up to a change of sign for some of the components of $\boldsymbol{X}$, the off-diagonal elements of the inverse of $\mathrm{var}(\boldsymbol{X})$ are all nonpositive. The latter condition can be stated more precisely as follows using the notion of signature matrix, i.e., a diagonal matrix whose diagonal elements are $\pm 1$.

(II) There exists a $d \times d$ signature matrix $D$ such that the covariance matrix $\mathrm{var}(D\boldsymbol{X})^{-1}$ only has nonpositive off-diagonal elements.

Two other conditions of interest on the structure of the random vector $\boldsymbol{X}$ are as follows.
(III) There exist a $d \times d$ signature matrix $D$ and a $d \times d$ matrix $C$ with entries in $[0, \infty)$ such that the random vector $D\boldsymbol{X}$ has the same distribution as the random vector $C\boldsymbol{Z}$, where $\boldsymbol{Z} \sim \mathcal{N}_d(\boldsymbol{0}_d, \boldsymbol{I}_d)$ is a $d \times 1$ Gaussian random vector with zero mean vector $\boldsymbol{0}_d$ and identity covariance matrix $\boldsymbol{I}_d$.

(IV) There exists a $d \times d$ signature matrix $D$ such that the covariance matrix $\mathrm{var}(D\boldsymbol{X})$ has only nonnegative elements.

Recently, Russell and Sun [22] used Condition (IV) to show that, for all integers $d \in \mathbb{N}$, $n_1, \ldots, n_d \in \mathbb{N}_0$, and $k \in \{1, \ldots, d-1\}$, and up to a change of sign for some of the components of $\boldsymbol{X}$, one has

(3)  $$\mathrm{E}\left(\prod_{i=1}^{d} X_i^{2n_i}\right) \ge \mathrm{E}\left(\prod_{i=1}^{k} X_i^{2n_i}\right) \mathrm{E}\left(\prod_{i=k+1}^{d} X_i^{2n_i}\right).$$

This result was further extended by Edelmann et al. [4] to the case where the random vector $(X_1^2, \ldots, X_d^2)$ has a multivariate gamma distribution in the sense of Krishnamoorthy and Parthasarathy [10]. See also [3] for a use of Condition (IV) in the context of the Gaussian correlation inequality (GCI) conjecture.

In the following section, it will be shown how Condition (III) can be exploited to give a combinatorial proof of a weak form of inequality (3). It will then be seen in Section 4 that Condition (II) implies Condition (III), thereby proving the implications illustrated in Figure 1 between Conditions (I)–(IV).
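For a given covariance matrix, Condition (II) can be tested mechanically by enumerating all $2^d$ signature matrices. A minimal sketch, assuming the precision matrix $A = \mathrm{var}(\boldsymbol{X})^{-1}$ is available; the example matrix below is made up for illustration and is not taken from the paper.

```python
# Brute-force check of Condition (II): search over all 2^d signature
# matrices D = diag(s) for one making the off-diagonal entries of D A D
# nonpositive, where A = var(X)^{-1}.  The example matrix is made up.
from itertools import product

def condition_II_signature(A):
    """Return a sign vector s with s[i]*s[j]*A[i][j] <= 0 for all i != j,
    or None if no signature matrix works."""
    d = len(A)
    for s in product([1, -1], repeat=d):
        if all(s[i] * s[j] * A[i][j] <= 0
               for i in range(d) for j in range(d) if i != j):
            return s
    return None

# Precision matrix obtained by flipping the sign of the middle coordinate
# in a tridiagonal M-matrix, so D = diag(1, -1, 1) should be found.
A = [[2, 1, 0],
     [1, 2, 1],
     [0, 1, 2]]
print(condition_II_signature(A))  # (1, -1, 1)
```

The same enumeration, applied to $\mathrm{var}(D\boldsymbol{X})$ instead of its inverse and testing for nonnegative entries, checks Condition (IV).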
That the implications Condition (II) $\Rightarrow$ Condition (III) and Condition (III) $\Rightarrow$ Condition (IV) are strict can be checked using, respectively, the covariance matrices
$$\mathrm{var}(\boldsymbol{X}) = \begin{pmatrix} 3/2 & 9/8 & 9/8 \\ 9/8 & 21/16 & 3/4 \\ 9/8 & 3/4 & 21/16 \end{pmatrix} = \begin{pmatrix} 1 & 1/2 & 1/2 \\ 1/2 & 1 & 1/4 \\ 1/2 & 1/4 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1/2 & 1/2 \\ 1/2 & 1 & 1/4 \\ 1/2 & 1/4 & 1 \end{pmatrix}$$
and
$$\mathrm{var}(\boldsymbol{X}) = \begin{pmatrix} 1 & 0 & 0 & 1/2 & 1/2 \\ 0 & 1 & 3/4 & 0 & 1/2 \\ 0 & 3/4 & 1 & 1/2 & 0 \\ 1/2 & 0 & 1/2 & 1 & 0 \\ 1/2 & 1/2 & 0 & 0 & 1 \end{pmatrix} + \varepsilon \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix},$$
for some appropriate $\varepsilon \in (0, \infty)$.

Figure 1: Implications between Conditions (I)–(IV) for a nonsingular centered Gaussian random vector $\boldsymbol{X}$, with references.

In the first example, the matrix $\mathrm{var}(\boldsymbol{X})$ is completely positive (meaning that it can be written as $CC^\top$ for some matrix $C$ with nonnegative entries) and positive definite by construction. Furthermore, the matrix $D\,\mathrm{var}(\boldsymbol{X})^{-1} D$ has at least one positive off-diagonal element for any of the eight possible choices of $3 \times 3$ signature matrix $D$. Another way to see this is to observe that if $A = \mathrm{var}(\boldsymbol{X})^{-1}$, then the cyclic product $a_{12} a_{23} a_{31}$, which is invariant under $A \mapsto DAD$, is strictly positive in this example, so that the off-diagonal elements of $\mathrm{var}(\boldsymbol{X})^{-1}$ cannot all be nonpositive. This shows that (III) $\not\Rightarrow$ (II). This example was adapted from ideas communicated to the authors by Thomas Royen.

For the second example, when $\varepsilon = 0$, it is mentioned by Maxfield and Minc [15], using a result of Hall [6], that the matrix is positive semidefinite and has only nonnegative elements, but that it is not completely positive. Given that the set of $5 \times 5$ completely positive matrices is closed, there exists $\varepsilon \in (0, \infty)$ small enough that the matrix $\mathrm{var}(\boldsymbol{X})$ is positive definite and has only nonnegative elements but is not completely positive.
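The first of the two examples above can be verified exactly in rational arithmetic: the sketch below rebuilds $\mathrm{var}(\boldsymbol{X}) = CC^\top$, inverts it via the adjugate, and checks that the cyclic product $a_{12}a_{23}a_{31}$ is strictly positive, so Condition (II) fails while Condition (III) holds by construction.

```python
# Exact check of the first example: var(X) = C C^T with C >= 0 entrywise
# (Condition (III) holds), yet the cyclic product a12*a23*a31 of the
# entries of var(X)^{-1} is strictly positive, which rules out Condition
# (II) for every signature matrix D, since the product is invariant under
# A -> D A D.
from fractions import Fraction as F

C = [[F(1), F(1, 2), F(1, 2)],
     [F(1, 2), F(1), F(1, 4)],
     [F(1, 2), F(1, 4), F(1)]]

# var(X) = C C^T (C is symmetric here, so this is just C*C)
S = [[sum(C[i][k] * C[j][k] for k in range(3)) for j in range(3)]
     for i in range(3)]

def inv3(M):
    """Inverse of a 3x3 matrix via the adjugate; cyclic indexing absorbs
    the cofactor signs."""
    det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
           - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
           + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    cof = [[M[(i+1) % 3][(j+1) % 3] * M[(i+2) % 3][(j+2) % 3]
            - M[(i+1) % 3][(j+2) % 3] * M[(i+2) % 3][(j+1) % 3]
            for j in range(3)] for i in range(3)]
    return [[cof[j][i] / det for j in range(3)] for i in range(3)]

A = inv3(S)
print(A[0][1] * A[1][2] * A[2][0] > 0)  # True: Condition (II) cannot hold
```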
More generally, given that the elements of $\mathrm{var}(\boldsymbol{X})$ are all nonnegative, the matrix $D\,\mathrm{var}(\boldsymbol{X})\, D$ is not completely positive for any of the 32 possible choices of $5 \times 5$ signature matrix $D$, which shows that (IV) $\not\Rightarrow$ (III). This idea was adapted from comments by Stein [24].

3 A combinatorial proof of the GPI conjecture

The following result, which is this paper's main result, shows that the extended GPI conjecture of Li and Wei [13] given in (2) holds true under Condition (III) when the reals $\alpha_1, \ldots, \alpha_d$ are nonnegative integers. This result also follows from inequality (3), due to Russell and Sun [22], but the argument below is completely different from the latter authors' derivation based on Condition (IV).

Proposition 2. Let $\boldsymbol{X} = (X_1, \ldots, X_d)$ be a $d$-variate centered Gaussian random vector. Assume that there exist a $d \times d$ signature matrix $D$ and a $d \times d$ matrix $C$ with entries in $[0, \infty)$ such that the random vector $D\boldsymbol{X}$ has the same distribution as the random vector $C\boldsymbol{Z}$, where $\boldsymbol{Z} \sim \mathcal{N}_d(\boldsymbol{0}_d, \boldsymbol{I}_d)$ is a $d$-dimensional standard Gaussian random vector.
Then, for all integers $n_1, \ldots, n_d \in \mathbb{N}_0$,
$$\mathrm{E}\left(\prod_{i=1}^{d} X_i^{2n_i}\right) \ge \prod_{i=1}^{d} \mathrm{E}(X_i^{2n_i}).$$

Proof. In terms of $\boldsymbol{Z}$, the claimed inequality is equivalent to

(4)  $$\mathrm{E}\left\{\prod_{i=1}^{d} \left(\sum_{j=1}^{d} c_{ij} Z_j\right)^{2n_i}\right\} \ge \prod_{i=1}^{d} \mathrm{E}\left\{\left(\sum_{j=1}^{d} c_{ij} Z_j\right)^{2n_i}\right\}.$$

For each integer $j \in \{1, \ldots, d\}$, set $K_j = k_{1j} + \cdots + k_{dj}$ and $L_j = \ell_{1j} + \cdots + \ell_{dj}$, where the $k_{ij}$ and $\ell_{ij}$ are nonnegative integer-valued indices to be used in expressions (5) and (6).

By the multinomial formula, the left-hand side of inequality (4) can be expanded as
$$\mathrm{E}\left\{\prod_{i=1}^{d} \sum_{\substack{\boldsymbol{k}_i \in \mathbb{N}_0^d:\\ k_{i1} + \cdots + k_{id} = 2n_i}} \binom{2n_i}{k_{i1}, \ldots, k_{id}} \prod_{j=1}^{d} c_{ij}^{k_{ij}} Z_j^{k_{ij}}\right\}.$$
Calling on the linearity of expectations and the mutual independence of the components of the random vector $\boldsymbol{Z}$, one can then rewrite this expression as
$$\sum_{\substack{\boldsymbol{k}_1 \in \mathbb{N}_0^d:\\ k_{11} + \cdots + k_{1d} = 2n_1}} \cdots \sum_{\substack{\boldsymbol{k}_d \in \mathbb{N}_0^d:\\ k_{d1} + \cdots + k_{dd} = 2n_d}} \left\{\prod_{j=1}^{d} \mathrm{E}(Z_j^{K_j})\right\} \prod_{i=1}^{d} \binom{2n_i}{k_{i1}, \ldots, k_{id}} \prod_{j=1}^{d} c_{ij}^{k_{ij}}.$$
Given that the coefficients $c_{ij}$ are all nonnegative by assumption, and exploiting the fact that, for every integer $j \in \{1, \ldots, d\}$ and $m \in \mathbb{N}_0$,
$$\mathrm{E}(Z_j^{2m}) = \frac{(2m)!}{2^m m!},$$
one can bound the left-hand side of inequality (4) from below by

(5)  $$\sum_{\substack{\boldsymbol{\ell}_1 \in \mathbb{N}_0^d:\\ 2\ell_{11} + \cdots + 2\ell_{1d} = 2n_1}} \cdots \sum_{\substack{\boldsymbol{\ell}_d \in \mathbb{N}_0^d:\\ 2\ell_{d1} + \cdots + 2\ell_{dd} = 2n_d}} \left\{\prod_{j=1}^{d} \mathrm{E}(Z_j^{2L_j})\right\} \prod_{i=1}^{d} \binom{2n_i}{2\ell_{i1}, \ldots, 2\ell_{id}} \prod_{j=1}^{d} c_{ij}^{2\ell_{ij}} = \sum_{\substack{\boldsymbol{\ell}_1 \in \mathbb{N}_0^d:\\ \ell_{11} + \cdots + \ell_{1d} = n_1}} \cdots \sum_{\substack{\boldsymbol{\ell}_d \in \mathbb{N}_0^d:\\ \ell_{d1} + \cdots + \ell_{dd} = n_d}} \left\{\prod_{j=1}^{d} \frac{(2L_j)!}{2^{L_j} L_j!}\right\} \prod_{i=1}^{d} \binom{2n_i}{2\ell_{i1}, \ldots, 2\ell_{id}} \prod_{j=1}^{d} c_{ij}^{2\ell_{ij}}.$$

The right-hand side of (4) can be expanded in a similar way. Upon using the fact that $\mathrm{E}(Y^{2m}) = (2m)!\,\sigma^{2m}/(2^m m!)$ for every integer $m \in \mathbb{N}_0$ when $Y \sim \mathcal{N}(0, \sigma^2)$, one finds

(6)  $$\prod_{i=1}^{d} \mathrm{E}\left\{\left(\sum_{j=1}^{d} c_{ij} Z_j\right)^{2n_i}\right\} = \prod_{i=1}^{d} \frac{(2n_i)!}{2^{n_i} n_i!} \left(\sum_{j=1}^{d} c_{ij}^2\right)^{n_i} = \prod_{i=1}^{d} \frac{(2n_i)!}{2^{n_i} n_i!} \sum_{\substack{\boldsymbol{\ell}_i \in \mathbb{N}_0^d:\\ \ell_{i1} + \cdots + \ell_{id} = n_i}} \binom{n_i}{\ell_{i1}, \ldots, \ell_{id}} \prod_{j=1}^{d} c_{ij}^{2\ell_{ij}} = \sum_{\substack{\boldsymbol{\ell}_1 \in \mathbb{N}_0^d:\\ \ell_{11} + \cdots + \ell_{1d} = n_1}} \cdots \sum_{\substack{\boldsymbol{\ell}_d \in \mathbb{N}_0^d:\\ \ell_{d1} + \cdots + \ell_{dd} = n_d}} \prod_{i=1}^{d} \frac{(2n_i)!}{2^{n_i} n_i!} \binom{n_i}{\ell_{i1}, \ldots, \ell_{id}} \prod_{j=1}^{d} c_{ij}^{2\ell_{ij}}.$$

Next, compare the coefficients of the corresponding powers $c_{ij}^{2\ell_{ij}}$ in expressions (5) and (6). To prove inequality (4), it suffices to show that, for all integer-valued vectors $\boldsymbol{\ell}_1, \ldots, \boldsymbol{\ell}_d \in \mathbb{N}_0^d$ satisfying $\ell_{i1} + \cdots + \ell_{id} = n_i$ for every integer $i \in \{1, \ldots, d\}$, one has
$$\left\{\prod_{j=1}^{d} \frac{(2L_j)!}{2^{L_j} L_j!}\right\} \prod_{i=1}^{d} \binom{2n_i}{2\ell_{i1}, \ldots, 2\ell_{id}} \ge \prod_{i=1}^{d} \frac{(2n_i)!}{2^{n_i} n_i!} \binom{n_i}{\ell_{i1}, \ldots, \ell_{id}}.$$
Taking into account the fact that $2^{L_1 + \cdots + L_d} = 2^{n_1 + \cdots + n_d}$, and after cancelling some factorials, one finds that this inequality reduces to

(7)  $$\prod_{j=1}^{d} \frac{(2L_j)!}{\prod_{i=1}^{d} (2\ell_{ij})!} \ge \prod_{j=1}^{d} \frac{L_j!}{\prod_{i=1}^{d} \ell_{ij}!}.$$

Therefore, the proof is complete if one can establish inequality (7). To this end, one can assume without loss of generality that the integers $L_1, \ldots, L_d$ are all nonzero; otherwise, inequality (7) reduces to a lower-dimensional case. For any given integers $L_1, \ldots, L_d \in \mathbb{N}$ and every integer $j \in \{1, \ldots, d\}$, define the function
$$a \mapsto g_j(a) = \frac{\Gamma(a L_j + 1)}{\prod_{i=1}^{d} \Gamma(a \ell_{ij} + 1)}$$
on the interval $(-1/L_j, \infty)$, where $\Gamma$ denotes Euler's gamma function. Inequality (7) states precisely that $g_1(2) \cdots g_d(2) \ge g_1(1) \cdots g_d(1)$, so it suffices to show that, for every integer $j \in \{1, \ldots, d\}$, the map $a \mapsto \ln\{g_j(a)\}$ is nondecreasing on $[0, \infty)$. Direct computations yield, for every real $a \in [0, \infty)$,
$$\frac{\mathrm{d}}{\mathrm{d}a} \ln\{g_j(a)\} = L_j\, \psi(a L_j + 1) - \sum_{i=1}^{d} \ell_{ij}\, \psi(a \ell_{ij} + 1), \qquad \frac{\mathrm{d}^2}{\mathrm{d}a^2} \ln\{g_j(a)\} = L_j^2\, \psi'(a L_j + 1) - \sum_{i=1}^{d} \ell_{ij}^2\, \psi'(a \ell_{ij} + 1),$$
where $\psi = (\ln \Gamma)'$ denotes the digamma function. Now call on the integral representation [1, p. 260]
$$\psi'(z) = \int_0^\infty \frac{t\, \mathrm{e}^{-(z-1)t}}{\mathrm{e}^t - 1}\, \mathrm{d}t,$$
valid for every real $z \in (0, \infty)$, to write

(8)  $$\frac{\mathrm{d}^2}{\mathrm{d}a^2} \ln\{g_j(a)\} = \int_0^\infty \frac{(L_j t)\, \mathrm{e}^{-a(L_j t)}}{\mathrm{e}^t - 1}\, L_j\, \mathrm{d}t - \sum_{i=1}^{d} \int_0^\infty \frac{(\ell_{ij} t)\, \mathrm{e}^{-a(\ell_{ij} t)}}{\mathrm{e}^t - 1}\, \ell_{ij}\, \mathrm{d}t = \int_0^\infty s\, \mathrm{e}^{-as} \left\{\frac{1}{\mathrm{e}^{s/L_j} - 1} - \sum_{i=1}^{d} \frac{1}{(\mathrm{e}^{s/L_j})^{L_j/\ell_{ij}} - 1}\right\} \mathrm{d}s.$$

Given that $(\ell_{1j} + \cdots + \ell_{dj})/L_j = 1$ by construction, the quantity within braces in equation (8) is always nonnegative by Lemma 1.4 of Ouimet [16]; this can be checked upon setting $y = \mathrm{e}^{s/L_j}$ and $u_i = \ell_{ij}/L_j$ for every integer $i \in \{1, \ldots, d\}$ in that paper's notation. Alternatively, see p. 516 in the study by Qi et al. [17]. Therefore,

(9)  $$\frac{\mathrm{d}^2}{\mathrm{d}a^2} \ln\{g_j(a)\} \ge 0 \quad \text{for all } a \in [0, \infty).$$

In fact, the map $a \mapsto \mathrm{d}^2 \ln\{g_j(a)\}/\mathrm{d}a^2$ is even completely monotonic.
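As an aside, inequality (7) decouples over the columns $j$: it suffices that each multinomial coefficient $\binom{2L_j}{2\ell_{1j}, \ldots, 2\ell_{dj}}$ dominates $\binom{L_j}{\ell_{1j}, \ldots, \ell_{dj}}$, and this can be spot-checked exactly for small entries (a sanity check, not part of the proof):

```python
# Exhaustive exact check, for small column entries, of the per-column form
# of inequality (7): multinomial(2*l_1j, ..., 2*l_dj) >= multinomial(l_1j,
# ..., l_dj).  Everything is computed in exact integer arithmetic.
import math
from itertools import product

def multinomial(parts):
    """(p1 + ... + pk)! / (p1! ... pk!), computed exactly as a product of
    binomial coefficients."""
    total, out = 0, 1
    for p in parts:
        total += p
        out *= math.comb(total, p)
    return out

# All columns (l_1j, l_2j, l_3j) with entries in {0, ..., 4}
ok = all(multinomial([2 * l for l in col]) >= multinomial(col)
         for col in product(range(5), repeat=3))
print(ok)  # True
```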
Moreover, given that
$$\left.\frac{\mathrm{d}}{\mathrm{d}a} \ln\{g_j(a)\}\right|_{a=0} = L_j\, \psi(1) - \sum_{i=1}^{d} \ell_{ij}\, \psi(1) = 0 \times \psi(1) = 0,$$
one can deduce from inequality (9) that $\frac{\mathrm{d}}{\mathrm{d}a} \ln\{g_j(a)\} \ge 0$ for all $a \in [0, \infty)$. Hence, the map $a \mapsto \ln\{g_j(a)\}$ is nondecreasing on $[0, \infty)$. This concludes the argument. □

4 Condition (II) implies Condition (III)

This paper's second result, stated below, shows that Condition (II) implies Condition (III). In view of Figure 1, one may then conclude that Condition (I) also implies Conditions (III) and (IV), and hence also that Condition (II) implies Condition (IV). The implication Condition (II) $\Rightarrow$ Condition (IV) was already established in Theorem 2(i) of Karlin and Rinott [9], and its strictness was mentioned at the top of p. 427 of the same paper.

Proposition 3. Let $\Sigma$ be a symmetric positive definite matrix with Cholesky decomposition $\Sigma = CC^\top$. If the off-diagonal entries of $\Sigma^{-1}$ are all nonpositive, then the elements of $C$ are all nonnegative.

Proof. The proof is by induction on the dimension $d$ of $\Sigma$. The claim trivially holds when $d = 1$. Assume that it is verified for some integer $n \in \mathbb{N}$, and fix $d = n + 1$. Given the assumptions on $\Sigma$, one can write
$$\Sigma^{-1} = \begin{pmatrix} a & \boldsymbol{v}^\top \\ \boldsymbol{v} & B \end{pmatrix}$$
in terms of a real $a \in (0, \infty)$, an $n \times 1$ vector $\boldsymbol{v}$ with nonpositive components, and an $n \times n$ matrix $B$ with nonpositive off-diagonal entries. Given that $\Sigma$ is symmetric positive definite by assumption, so is $\Sigma^{-1}$, and hence so are $B$ and $B^{-1}$.
Moreover, the off-diagonal entries of $B = (B^{-1})^{-1}$ are nonpositive, and hence, by induction, the factor $L$ in the Cholesky decomposition $B^{-1} = LL^\top$ has nonnegative entries. Letting $w = a - \boldsymbol{v}^\top L L^\top \boldsymbol{v}$ denote the Schur complement, which is strictly positive, one has
$$\Sigma^{-1} = \begin{pmatrix} a & \boldsymbol{v}^\top \\ \boldsymbol{v} & (LL^\top)^{-1} \end{pmatrix} = \begin{pmatrix} \sqrt{w} & \boldsymbol{v}^\top L \\ \boldsymbol{0}_n & (L^\top)^{-1} \end{pmatrix} \begin{pmatrix} \sqrt{w} & \boldsymbol{0}_n^\top \\ L^\top \boldsymbol{v} & L^{-1} \end{pmatrix},$$
where $\boldsymbol{0}_n$ is an $n \times 1$ vector of zeros. Accordingly,
$$\Sigma = \begin{pmatrix} \sqrt{w} & \boldsymbol{0}_n^\top \\ L^\top \boldsymbol{v} & L^{-1} \end{pmatrix}^{-1} \begin{pmatrix} \sqrt{w} & \boldsymbol{v}^\top L \\ \boldsymbol{0}_n & (L^\top)^{-1} \end{pmatrix}^{-1} = \begin{pmatrix} 1/\sqrt{w} & \boldsymbol{0}_n^\top \\ -LL^\top \boldsymbol{v}/\sqrt{w} & L \end{pmatrix} \begin{pmatrix} 1/\sqrt{w} & \boldsymbol{0}_n^\top \\ -LL^\top \boldsymbol{v}/\sqrt{w} & L \end{pmatrix}^\top = CC^\top.$$
Recall that $w$ is strictly positive and that all the entries of $L$ and $-\boldsymbol{v}$ are nonnegative. Hence, all the elements of $C$ are nonnegative, and the argument is complete. □

5 Discussion

This paper shows that the Gaussian product inequality (2) holds under Condition (III) when the reals $\alpha_1, \ldots, \alpha_d$ are nonnegative integers. This assumption is further seen to be strictly weaker than Condition (II).
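Proposition 3 also lends itself to a quick numerical check: start from a precision matrix with nonpositive off-diagonal entries, invert it, and verify that the Cholesky factor of the resulting covariance matrix is entrywise nonnegative. The tridiagonal M-matrix below is a made-up example, not one from the paper.

```python
# Numerical sketch of Proposition 3: A = Sigma^{-1} has nonpositive
# off-diagonal entries, so the lower-triangular Cholesky factor of
# Sigma = A^{-1} should be entrywise nonnegative.
import math

A = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]

# Sigma = A^{-1}; for this tridiagonal matrix the inverse is known in
# closed form: Sigma[i][j] = (min(i,j)+1) * (3 - max(i,j)) / 4 (0-based).
Sigma = [[(min(i, j) + 1) * (3 - max(i, j)) / 4 for j in range(3)]
         for i in range(3)]

def cholesky(M):
    """Lower-triangular L with M = L L^T (no pivoting; M must be SPD)."""
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(M[i][i] - s)
            else:
                L[i][j] = (M[i][j] - s) / L[j][j]
    return L

L = cholesky(Sigma)
print(all(L[i][j] >= 0 for i in range(3) for j in range(3)))  # True
```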
It thus follows from the implications in Figure 1 that, when the reals $\alpha_1, \ldots, \alpha_d$ are nonnegative integers, inequality (2) holds more generally than under the $\mathrm{MTP}_2$ condition of Karlin and Rinott [8].

Shortly after the first draft of this article was posted on arXiv, extensions of Proposition 2 were announced by Russell and Sun [22] and Edelmann et al. [4]; see Lemma 2.3 and Theorem 2.1, respectively, in their manuscripts. Beyond priority claims, which are nugatory, the originality of the present work lies mainly in its method of proof, and in the clarification it provides of the relationship between the various assumptions made in the relevant literature, as summarized by Figure 1.

Beyond its intrinsic interest, the approach to the proof of the GPI presented herein, together with its link to the complete monotonicity of multinomial probabilities previously shown by Ouimet [16] and Qi et al. [17], hints at a deep relationship between the $\mathrm{MTP}_2$ class for the multivariate gamma distributions of Krishnamoorthy and Parthasarathy [10], the range of admissible parameter values for their infinite divisibility, and the complete monotonicity of their Laplace transform; see the work by Royen on the GCI conjecture [18,19,20,21] and Theorems 1.2 and 1.3 of Scott and Sokal [23]. These topics, and the proof or refutation of the GPI in its full generality, provide interesting avenues for future research.

Appendix: Technical lemma

The following result, used in the proof of Proposition 2, extends Lemma 1 of Alzer [2] from the case $d = 1$ to an arbitrary integer $d \in \mathbb{N}$.
It was already reported by Ouimet [16] (see his Lemma 4.1), but its short statement and proof are included here to make the article more self-contained.

Lemma A.1. For every integer $d \in \mathbb{N}$ and all real numbers $y \in (1, \infty)$ and $u_1, \ldots, u_{d+1} \in (0, 1)$ such that $u_1 + \cdots + u_{d+1} = 1$, one has

(A.1)  $$\frac{1}{y - 1} > \sum_{i=1}^{d+1} \frac{1}{y^{1/u_i} - 1}.$$

Proof. The proof is by induction on the integer $d$. The case $d = 1$ is the statement of Lemma 1 of Alzer [2]. Fix an integer $d \ge 2$ and assume that inequality (A.1) holds for every smaller integer. Fix any reals $y \in (1, \infty)$ and $u_1, \ldots, u_d \in (0, 1)$ such that $\|\boldsymbol{u}\|_1 = u_1 + \cdots + u_d < 1$, and write $u_{d+1} = 1 - \|\boldsymbol{u}\|_1 > 0$. Calling on Alzer's result, one has
$$\frac{1}{y - 1} > \frac{1}{y^{1/\|\boldsymbol{u}\|_1} - 1} + \frac{1}{y^{1/(1 - \|\boldsymbol{u}\|_1)} - 1}.$$
Therefore, the conclusion follows if one can show that
$$\frac{1}{y^{1/\|\boldsymbol{u}\|_1} - 1} > \sum_{i=1}^{d} \frac{1}{y^{1/u_i} - 1}.$$
Upon setting $z = y^{1/\|\boldsymbol{u}\|_1}$ and $v_i = u_i/\|\boldsymbol{u}\|_1$, one finds that this inequality is equivalent to
$$\frac{1}{z - 1} > \sum_{i=1}^{d} \frac{1}{z^{1/v_i} - 1},$$
which is true by the induction assumption. Therefore, the argument is complete. □
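Lemma A.1 is straightforward to sanity-check numerically on a small, arbitrary grid of points $(y, u_1, \ldots, u_{d+1})$:

```python
# Numerical sanity check of Lemma A.1: for y > 1 and positive weights
# u_1, ..., u_{d+1} summing to 1, 1/(y - 1) strictly exceeds the sum of
# the terms 1/(y^(1/u_i) - 1).  The grid of test points is arbitrary.
cases = [
    (1.5, (0.5, 0.5)),
    (1.5, (0.1, 0.2, 0.3, 0.4)),
    (2.0, (1 / 3, 1 / 3, 1 / 3)),
    (10.0, (0.25, 0.25, 0.25, 0.25)),
]
ok = all(1 / (y - 1) > sum(1 / (y ** (1 / u) - 1) for u in weights)
         for y, weights in cases)
print(ok)  # True
```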

A combinatorial proof of the Gaussian product inequality beyond the MTP2 case

Dependence Modeling, Volume 10 (1): 9 – Jan 1, 2022

Publisher
de Gruyter
Copyright
© 2022 Christian Genest and Frédéric Ouimet, published by De Gruyter
ISSN
2300-2298
eISSN
2300-2298
DOI
10.1515/demo-2022-0116

Therefore, the present paper's main contribution resides in its method of proof, which relies on a combinatorial argument closely related to the complete monotonicity of multinomial probabilities previously established by Ouimet [16] and Qi et al. [17].

All background material needed to understand the contribution and put it in perspective is provided in Section 2. The statements and proofs of the paper's results are presented in Sections 3 and 4. The article concludes with a brief discussion in Section 5. For completeness, a technical lemma due to Ouimet [16], which is used in the proof of Proposition 2, is included in the Appendix.

2 Background

First, recall the definition of multivariate total positivity of order 2 ($\text{MTP}_2$) on a set $\mathcal{S} \subseteq \mathbb{R}^d$.

Definition 1. A density $f : \mathbb{R}^d \to [0,\infty)$ supported on $\mathcal{S}$ is said to be multivariate totally positive of order 2, denoted $\text{MTP}_2$, if and only if, for all vectors $\boldsymbol{x} = (x_1, \ldots, x_d), \boldsymbol{y} = (y_1, \ldots, y_d) \in \mathcal{S}$, one has
$$f(\boldsymbol{x} \vee \boldsymbol{y})\, f(\boldsymbol{x} \wedge \boldsymbol{y}) \ge f(\boldsymbol{x})\, f(\boldsymbol{y}),$$
where $\boldsymbol{x} \vee \boldsymbol{y} = (\max(x_1, y_1), \ldots, \max(x_d, y_d))$ and $\boldsymbol{x} \wedge \boldsymbol{y} = (\min(x_1, y_1), \ldots, \min(x_d, y_d))$.

Densities in this class enjoy many interesting properties, including the following result, which corresponds to equation (1.7) of Karlin and Rinott [8].

Proposition 1. Let $\boldsymbol{Y}$ be an $\text{MTP}_2$ random vector on $\mathcal{S}$, and let $\varphi_1, \ldots, \varphi_r$ be a collection of nonnegative and (component-wise) nondecreasing functions on $\mathcal{S}$.
Then
$$E\left\{\prod_{i=1}^{r} \varphi_i(\boldsymbol{Y})\right\} \ge \prod_{i=1}^{r} E\{\varphi_i(\boldsymbol{Y})\}.$$

In particular, let $\boldsymbol{X} = (X_1, \ldots, X_d)$ be a $d$-variate Gaussian random vector with zero mean and nonsingular covariance matrix $\mathrm{var}(\boldsymbol{X})$. Suppose that the following condition holds.

(I) The random vector $|\boldsymbol{X}| = (|X_1|, \ldots, |X_d|)$ belongs to the $\text{MTP}_2$ class on $[0,\infty)^d$.

Under Condition (I), the validity of the GPI conjecture (2) for all reals $\alpha_1, \ldots, \alpha_d \in [0,\infty)$ follows from Proposition 1 with $r = d$ and maps $\varphi_1, \ldots, \varphi_d$ defined, for every vector $\boldsymbol{y} = (y_1, \ldots, y_d) \in [0,\infty)^d$ and integer $i \in \{1, \ldots, d\}$, by $\varphi_i(\boldsymbol{y}) = y_i^{2\alpha_i}$.

When $\boldsymbol{X} = (X_1, \ldots, X_d)$ is a centered Gaussian random vector with covariance matrix $\mathrm{var}(\boldsymbol{X})$, Theorem 3.1 of Karlin and Rinott [8] shows that Condition (I) is equivalent to the requirement that, up to a change of sign for some of the components of $\boldsymbol{X}$, the off-diagonal elements of the inverse of $\mathrm{var}(\boldsymbol{X})$ are all nonpositive. The latter condition can be stated more precisely using the notion of a signature matrix, i.e., a diagonal matrix whose diagonal elements are $\pm 1$.

(II) There exists a $d \times d$ signature matrix $D$ such that the matrix $\mathrm{var}(D\boldsymbol{X})^{-1}$ only has nonpositive off-diagonal elements.

Two other conditions of interest on the structure of the random vector $\boldsymbol{X}$ are as follows.
(III) There exist a $d \times d$ signature matrix $D$ and a $d \times d$ matrix $C$ with entries in $[0,\infty)$ such that the random vector $D\boldsymbol{X}$ has the same distribution as the random vector $C\boldsymbol{Z}$, where $\boldsymbol{Z} \sim \mathcal{N}_d(\boldsymbol{0}_d, \boldsymbol{I}_d)$ is a $d \times 1$ Gaussian random vector with zero mean vector $\boldsymbol{0}_d$ and identity covariance matrix $\boldsymbol{I}_d$.

(IV) There exists a $d \times d$ signature matrix $D$ such that the covariance matrix $\mathrm{var}(D\boldsymbol{X})$ has only nonnegative elements.

Recently, Russell and Sun [22] used Condition (IV) to show that, for all integers $d \in \mathbb{N}$, $n_1, \ldots, n_d \in \mathbb{N}_0$ and $k \in \{1, \ldots, d-1\}$, and up to a change of sign for some of the components of $\boldsymbol{X}$, one has
$$E\left(\prod_{i=1}^{d} X_i^{2n_i}\right) \ge E\left(\prod_{i=1}^{k} X_i^{2n_i}\right) E\left(\prod_{i=k+1}^{d} X_i^{2n_i}\right). \tag{3}$$
This result was further extended by Edelmann et al. [4] to the case where the random vector $(X_1^2, \ldots, X_d^2)$ has a multivariate gamma distribution in the sense of Krishnamoorthy and Parthasarathy [10]. See also [3] for a use of Condition (IV) in the context of the Gaussian correlation inequality (GCI) conjecture.

In the following section, it will be shown how Condition (III) can be exploited to give a combinatorial proof of a weak form of inequality (3). It will then be seen in Section 4 that Condition (II) implies Condition (III), thereby proving the implications illustrated in Figure 1 between Conditions (I)–(IV).
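Moment inequalities such as (1) and (3) can be checked exactly for small cases via Isserlis' theorem, under which every even Gaussian moment is a sum, over perfect pairings of the indices, of products of covariances. The following Python sketch is illustrative only and is not part of the paper; the covariance matrix is a hypothetical example built as $CC^\top$ with $C$ nonnegative, so that Condition (III) holds with $D = I$.

```python
def gaussian_moment(cov, idx):
    """E[X_{i1}...X_{ik}] for a centered Gaussian vector, by Isserlis' theorem:
    sum over perfect pairings of products of covariances."""
    if len(idx) == 0:
        return 1.0
    if len(idx) % 2 == 1:
        return 0.0  # odd moments of a centered Gaussian vanish
    i, rest = idx[0], idx[1:]
    total = 0.0
    for pos, j in enumerate(rest):
        total += cov[i][j] * gaussian_moment(cov, rest[:pos] + rest[pos + 1:])
    return total

def even_moment(cov, powers):
    """E[prod_i X_i^(2 n_i)] for powers = (n_1, ..., n_d)."""
    idx = tuple(i for i, n in enumerate(powers) for _ in range(2 * n))
    return gaussian_moment(cov, idx)

# Hypothetical example: var(X) = C C^T with C entrywise nonnegative.
C = [[1.0, 0.5], [0.5, 1.0]]
cov = [[sum(C[a][k] * C[b][k] for k in range(2)) for b in range(2)]
       for a in range(2)]

for powers in [(1, 1), (2, 1), (2, 2), (3, 1)]:
    lhs = even_moment(cov, powers)
    rhs = even_moment(cov, (powers[0], 0)) * even_moment(cov, (0, powers[1]))
    assert lhs >= rhs  # inequality (1) holds in this example
```

The recursion enumerates all pairings, so it is practical only for small total degree, but it is exact up to floating-point rounding.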
That the implications Condition (II) $\Rightarrow$ Condition (III) and Condition (III) $\Rightarrow$ Condition (IV) are strict can be checked using, respectively, the covariance matrices
$$\mathrm{var}(\boldsymbol{X}) = \begin{pmatrix} 3/2 & 9/8 & 9/8 \\ 9/8 & 21/16 & 3/4 \\ 9/8 & 3/4 & 21/16 \end{pmatrix} = \begin{pmatrix} 1 & 1/2 & 1/2 \\ 1/2 & 1 & 1/4 \\ 1/2 & 1/4 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1/2 & 1/2 \\ 1/2 & 1 & 1/4 \\ 1/2 & 1/4 & 1 \end{pmatrix}$$
and
$$\mathrm{var}(\boldsymbol{X}) = \begin{pmatrix} 1 & 0 & 0 & 1/2 & 1/2 \\ 0 & 1 & 3/4 & 0 & 1/2 \\ 0 & 3/4 & 1 & 1/2 & 0 \\ 1/2 & 0 & 1/2 & 1 & 0 \\ 1/2 & 1/2 & 0 & 0 & 1 \end{pmatrix} + \varepsilon \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix},$$
for some appropriately small $\varepsilon \in (0,\infty)$.

Figure 1: Implications between Conditions (I)–(IV) for a nonsingular centered Gaussian random vector $\boldsymbol{X}$, with references.

In the first example, the matrix $\mathrm{var}(\boldsymbol{X})$ is completely positive (meaning that it can be written as $CC^\top$ for some matrix $C$ with nonnegative entries) and positive definite by construction. Furthermore, the matrix $D\,\mathrm{var}(\boldsymbol{X})^{-1} D$ has at least one positive off-diagonal element for each of the eight possible choices of $3 \times 3$ signature matrix $D$. Another way to see this is to observe that if $A = \mathrm{var}(\boldsymbol{X})^{-1}$, then the cyclic product $a_{12} a_{23} a_{31}$, which is invariant under the map $A \mapsto DAD$, is strictly positive in this example, so that the off-diagonal elements of $DAD$ cannot all be nonpositive. This shows that (III) $\not\Rightarrow$ (II). This example was adapted from ideas communicated to the authors by Thomas Royen.

For the second example, when $\varepsilon = 0$, Maxfield and Minc [15] mention, using a result of Hall [6], that the matrix is positive semidefinite and has only nonnegative elements but is not completely positive. Given that the set of $5 \times 5$ completely positive matrices is closed, there exists $\varepsilon \in (0,\infty)$ small enough that the matrix $\mathrm{var}(\boldsymbol{X})$ is positive definite and has only nonnegative elements but is not completely positive.
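The cyclic-product argument for the first example can be verified with exact rational arithmetic. The following Python sketch is illustrative (the paper itself contains no code): it reconstructs $\mathrm{var}(\boldsymbol{X})$ from its completely positive factorization, inverts it by Gauss-Jordan elimination, and checks that $a_{12} a_{23} a_{31} > 0$.

```python
from fractions import Fraction as F

def inverse(M):
    # Gauss-Jordan elimination with exact rational arithmetic.
    n = len(M)
    A = [list(M[i]) + [F(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [x / p for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [row[n:] for row in A]

# Completely positive factorization var(X) = M M^T from the first example.
M = [[F(1), F(1, 2), F(1, 2)],
     [F(1, 2), F(1), F(1, 4)],
     [F(1, 2), F(1, 4), F(1)]]
Sigma = [[sum(M[i][k] * M[j][k] for k in range(3)) for j in range(3)]
         for i in range(3)]
assert Sigma == [[F(3, 2), F(9, 8), F(9, 8)],
                 [F(9, 8), F(21, 16), F(3, 4)],
                 [F(9, 8), F(3, 4), F(21, 16)]]

A = inverse(Sigma)
# The cyclic product a12*a23*a31 is invariant under A -> D A D for every
# signature matrix D; since it is strictly positive here, no signature change
# can make all off-diagonal entries of var(X)^{-1} nonpositive: (II) fails.
assert A[0][1] * A[1][2] * A[2][0] > 0
```

Exact rationals avoid any doubt about the sign of the cyclic product that floating-point rounding might otherwise introduce.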
More generally, given that the elements of $\mathrm{var}(\boldsymbol{X})$ are all nonnegative, the matrix $D\,\mathrm{var}(\boldsymbol{X})D$ is not completely positive for any of the 32 possible choices of $5 \times 5$ signature matrix $D$, which shows that (IV) $\not\Rightarrow$ (III). This idea was adapted from comments by Stein [24].

3 A combinatorial proof of the GPI conjecture

The following result, which is this paper's main result, shows that the extended GPI conjecture of Li and Wei [13] given in (2) holds true under Condition (III) when the reals $\alpha_1, \ldots, \alpha_d$ are nonnegative integers. This result also follows from inequality (3), due to Russell and Sun [22], but the argument below is completely different from the latter authors' derivation based on Condition (IV).

Proposition 2. Let $\boldsymbol{X} = (X_1, \ldots, X_d)$ be a $d$-variate centered Gaussian random vector. Assume that there exist a $d \times d$ signature matrix $D$ and a $d \times d$ matrix $C$ with entries in $[0,\infty)$ such that the random vector $D\boldsymbol{X}$ has the same distribution as the random vector $C\boldsymbol{Z}$, where $\boldsymbol{Z} \sim \mathcal{N}_d(\boldsymbol{0}_d, \boldsymbol{I}_d)$ is a $d$-dimensional standard Gaussian random vector.
Then, for all integers $n_1, \ldots, n_d \in \mathbb{N}_0$,
$$E\left(\prod_{i=1}^{d} X_i^{2n_i}\right) \ge \prod_{i=1}^{d} E(X_i^{2n_i}).$$

Proof. In terms of $\boldsymbol{Z}$, the claimed inequality is equivalent to
$$E\left\{\prod_{i=1}^{d} \left(\sum_{j=1}^{d} c_{ij} Z_j\right)^{2n_i}\right\} \ge \prod_{i=1}^{d} E\left\{\left(\sum_{j=1}^{d} c_{ij} Z_j\right)^{2n_i}\right\}. \tag{4}$$
For each integer $j \in \{1, \ldots, d\}$, set $K_j = k_{1j} + \cdots + k_{dj}$ and $L_j = \ell_{1j} + \cdots + \ell_{dj}$, where the $k_{ij}$ and $\ell_{ij}$ are nonnegative integer-valued indices to be used in expressions (5) and (6).

By the multinomial formula, the left-hand side of inequality (4) can be expanded as
$$E\left\{\prod_{i=1}^{d} \sum_{\substack{\boldsymbol{k}_i \in \mathbb{N}_0^d:\\ k_{i1}+\cdots+k_{id}=2n_i}} \binom{2n_i}{k_{i1}, \ldots, k_{id}} \prod_{j=1}^{d} c_{ij}^{k_{ij}} Z_j^{k_{ij}}\right\}.$$
Calling on the linearity of expectations and the mutual independence of the components of the random vector $\boldsymbol{Z}$, one can rewrite this expression as
$$\sum_{\substack{\boldsymbol{k}_1 \in \mathbb{N}_0^d:\\ k_{11}+\cdots+k_{1d}=2n_1}} \cdots \sum_{\substack{\boldsymbol{k}_d \in \mathbb{N}_0^d:\\ k_{d1}+\cdots+k_{dd}=2n_d}} \left\{\prod_{j=1}^{d} E(Z_j^{K_j})\right\} \prod_{i=1}^{d} \binom{2n_i}{k_{i1}, \ldots, k_{id}} \prod_{j=1}^{d} c_{ij}^{k_{ij}}.$$
Every summand is nonnegative, because the coefficients $c_{ij}$ are nonnegative by assumption and the moments $E(Z_j^{K_j})$ vanish when $K_j$ is odd. Retaining only the terms in which every index $k_{ij} = 2\ell_{ij}$ is even, and exploiting the fact that, for all integers $j \in \{1, \ldots, d\}$ and $m \in \mathbb{N}_0$,
$$E(Z_j^{2m}) = \frac{(2m)!}{2^m m!},$$
one can bound the left-hand side of inequality (4) from below by
$$\sum_{\substack{\boldsymbol{\ell}_1 \in \mathbb{N}_0^d:\\ \ell_{11}+\cdots+\ell_{1d}=n_1}} \cdots \sum_{\substack{\boldsymbol{\ell}_d \in \mathbb{N}_0^d:\\ \ell_{d1}+\cdots+\ell_{dd}=n_d}} \left\{\prod_{j=1}^{d} \frac{(2L_j)!}{2^{L_j} L_j!}\right\} \prod_{i=1}^{d} \binom{2n_i}{2\ell_{i1}, \ldots, 2\ell_{id}} \prod_{j=1}^{d} c_{ij}^{2\ell_{ij}}. \tag{5}$$
The right-hand side of (4) can be expanded in a similar way. Using the fact that $E(Y^{2m}) = (2m)!\,\sigma^{2m}/(2^m m!)$ for every integer $m \in \mathbb{N}_0$ when $Y \sim \mathcal{N}(0, \sigma^2)$, one finds
$$\prod_{i=1}^{d} E\left\{\left(\sum_{j=1}^{d} c_{ij} Z_j\right)^{2n_i}\right\} = \prod_{i=1}^{d} \frac{(2n_i)!}{2^{n_i} n_i!} \left(\sum_{j=1}^{d} c_{ij}^2\right)^{n_i} = \sum_{\substack{\boldsymbol{\ell}_1 \in \mathbb{N}_0^d:\\ \ell_{11}+\cdots+\ell_{1d}=n_1}} \cdots \sum_{\substack{\boldsymbol{\ell}_d \in \mathbb{N}_0^d:\\ \ell_{d1}+\cdots+\ell_{dd}=n_d}} \prod_{i=1}^{d} \frac{(2n_i)!}{2^{n_i} n_i!} \binom{n_i}{\ell_{i1}, \ldots, \ell_{id}} \prod_{j=1}^{d} c_{ij}^{2\ell_{ij}}. \tag{6}$$
Next, compare the coefficients of the corresponding powers $c_{ij}^{2\ell_{ij}}$ in expressions (5) and (6). To prove inequality (4), it suffices to show that, for all integer-valued vectors $\boldsymbol{\ell}_1, \ldots, \boldsymbol{\ell}_d \in \mathbb{N}_0^d$ satisfying $\ell_{i1} + \cdots + \ell_{id} = n_i$ for every integer $i \in \{1, \ldots, d\}$, one has
$$\left\{\prod_{j=1}^{d} \frac{(2L_j)!}{2^{L_j} L_j!}\right\} \prod_{i=1}^{d} \binom{2n_i}{2\ell_{i1}, \ldots, 2\ell_{id}} \ge \prod_{i=1}^{d} \frac{(2n_i)!}{2^{n_i} n_i!} \binom{n_i}{\ell_{i1}, \ldots, \ell_{id}}.$$
Taking into account the fact that $2^{L_1 + \cdots + L_d} = 2^{n_1 + \cdots + n_d}$, and after cancelling some factorials, one finds that this inequality reduces to
$$\prod_{j=1}^{d} \frac{(2L_j)!}{\prod_{i=1}^{d} (2\ell_{ij})!} \ge \prod_{j=1}^{d} \frac{L_j!}{\prod_{i=1}^{d} \ell_{ij}!}. \tag{7}$$
The proof is therefore complete if one can establish inequality (7). To this end, one can assume without loss of generality that the integers $L_1, \ldots, L_d$ are all nonzero; otherwise, inequality (7) reduces to a lower dimensional case. For any given integers $L_1, \ldots, L_d \in \mathbb{N}$ and every integer $j \in \{1, \ldots, d\}$, define the function
$$a \mapsto g_j(a) = \frac{\Gamma(a L_j + 1)}{\prod_{i=1}^{d} \Gamma(a \ell_{ij} + 1)}$$
on the interval $(-1/L_j, \infty)$, where $\Gamma$ denotes Euler's gamma function. Given that $g_j(1) = L_j!/\prod_{i=1}^{d} \ell_{ij}!$ and $g_j(2) = (2L_j)!/\prod_{i=1}^{d} (2\ell_{ij})!$, inequality (7) compares $\prod_{j} g_j(2)$ with $\prod_{j} g_j(1)$. To prove it, it thus suffices to show that, for every integer $j \in \{1, \ldots, d\}$, the map $a \mapsto \ln\{g_j(a)\}$ is nondecreasing on $[0, \infty)$. Direct computations yield, for every real $a \in [0, \infty)$,
$$\frac{\mathrm{d}}{\mathrm{d}a} \ln\{g_j(a)\} = L_j\, \psi(a L_j + 1) - \sum_{i=1}^{d} \ell_{ij}\, \psi(a \ell_{ij} + 1), \qquad \frac{\mathrm{d}^2}{\mathrm{d}a^2} \ln\{g_j(a)\} = L_j^2\, \psi'(a L_j + 1) - \sum_{i=1}^{d} \ell_{ij}^2\, \psi'(a \ell_{ij} + 1),$$
where $\psi = (\ln \Gamma)'$ denotes the digamma function. Now call on the integral representation [1, p. 260]
$$\psi'(z) = \int_0^\infty \frac{t\, \mathrm{e}^{-(z-1)t}}{\mathrm{e}^t - 1}\, \mathrm{d}t,$$
valid for every real $z \in (0, \infty)$, to write
$$\frac{\mathrm{d}^2}{\mathrm{d}a^2} \ln\{g_j(a)\} = \int_0^\infty \frac{(L_j t)\, \mathrm{e}^{-a(L_j t)}}{\mathrm{e}^t - 1}\, L_j\, \mathrm{d}t - \sum_{i=1}^{d} \int_0^\infty \frac{(\ell_{ij} t)\, \mathrm{e}^{-a(\ell_{ij} t)}}{\mathrm{e}^t - 1}\, \ell_{ij}\, \mathrm{d}t = \int_0^\infty s\, \mathrm{e}^{-as} \left\{\frac{1}{\mathrm{e}^{s/L_j} - 1} - \sum_{i=1}^{d} \frac{1}{(\mathrm{e}^{s/L_j})^{L_j/\ell_{ij}} - 1}\right\} \mathrm{d}s. \tag{8}$$
Given that $(\ell_{1j} + \cdots + \ell_{dj})/L_j = 1$ by construction, the quantity within braces in equation (8) is always nonnegative by Lemma A.1 in the Appendix, due to Ouimet [16]; this can be checked upon setting $y = \mathrm{e}^{s/L_j}$ and $u_i = \ell_{ij}/L_j$ for every integer $i \in \{1, \ldots, d\}$. Alternatively, see p. 516 of Qi et al. [17]. Therefore, for every $a \in [0, \infty)$,
$$\frac{\mathrm{d}^2}{\mathrm{d}a^2} \ln\{g_j(a)\} \ge 0. \tag{9}$$
In fact, the map $a \mapsto \mathrm{d}^2 \ln\{g_j(a)\}/\mathrm{d}a^2$ is even completely monotonic.
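As a sanity check on this combinatorial step, inequality (7) can also be verified exhaustively for small cases. The following Python sketch (illustrative only, using exact arithmetic) checks it for $d = 3$ and all arrays $(\ell_{ij})$ whose row sums $n_i$ do not exceed 2:

```python
from fractions import Fraction
from itertools import product
from math import factorial

def compositions(n, d):
    # All d-tuples of nonnegative integers summing to n.
    if d == 1:
        yield (n,)
    else:
        for first in range(n + 1):
            for rest in compositions(n - first, d - 1):
                yield (first,) + rest

d = 3
for n in product(range(3), repeat=d):  # row sums n_1, ..., n_d in {0, 1, 2}
    for rows in product(*(compositions(ni, d) for ni in n)):
        # rows[i][j] plays the role of l_ij; L_j is the j-th column sum.
        L = [sum(rows[i][j] for i in range(d)) for j in range(d)]
        lhs = rhs = Fraction(1)
        for j in range(d):
            lhs *= Fraction(factorial(2 * L[j]))
            rhs *= Fraction(factorial(L[j]))
            for i in range(d):
                lhs /= factorial(2 * rows[i][j])
                rhs /= factorial(rows[i][j])
        assert lhs >= rhs  # inequality (7)
```

Enumerating all such arrays keeps the check exact; the proof above, of course, covers all dimensions and row sums at once.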
Moreover, given that
$$\left.\frac{\mathrm{d}}{\mathrm{d}a} \ln\{g_j(a)\}\right|_{a=0} = L_j\, \psi(1) - \sum_{i=1}^{d} \ell_{ij}\, \psi(1) = 0 \times \psi(1) = 0,$$
one can deduce from inequality (9) that, for every $a \in [0, \infty)$, $\frac{\mathrm{d}}{\mathrm{d}a} \ln\{g_j(a)\} \ge 0$. Hence, the map $a \mapsto \ln\{g_j(a)\}$ is nondecreasing on $[0, \infty)$. This concludes the argument. □

4 Condition (II) implies Condition (III)

This paper's second result, stated below, shows that Condition (II) implies Condition (III). In view of Figure 1, one may then conclude that Condition (I) also implies Conditions (III) and (IV), and hence that Condition (II) implies Condition (IV) as well. The implication Condition (II) $\Rightarrow$ Condition (IV) was already established in Theorem 2 (i) of Karlin and Rinott [9], and its strictness was mentioned at the top of p. 427 of the same paper.

Proposition 3. Let $\Sigma$ be a symmetric positive definite matrix with Cholesky decomposition $\Sigma = CC^\top$. If the off-diagonal entries of $\Sigma^{-1}$ are all nonpositive, then the elements of $C$ are all nonnegative.

Proof. The proof is by induction on the dimension $d$ of $\Sigma$. The claim trivially holds when $d = 1$. Assume that it is verified for some integer $n \in \mathbb{N}$, and fix $d = n + 1$. Given the assumptions on $\Sigma$, one can write
$$\Sigma^{-1} = \begin{pmatrix} a & \boldsymbol{v}^\top \\ \boldsymbol{v} & B \end{pmatrix}$$
in terms of a real $a \in (0, \infty)$, an $n \times 1$ vector $\boldsymbol{v}$ with nonpositive components, and an $n \times n$ matrix $B$ with nonpositive off-diagonal entries. Given that $\Sigma$ is symmetric positive definite by assumption, so is $\Sigma^{-1}$, and hence so are $B$ and $B^{-1}$.
Moreover, the off-diagonal entries of $B = (B^{-1})^{-1}$ are nonpositive, so, by the induction hypothesis applied to $B^{-1}$, the factor $L$ in the Cholesky decomposition $B^{-1} = LL^\top$ has nonnegative entries. Letting $w = a - \boldsymbol{v}^\top L L^\top \boldsymbol{v}$ denote the Schur complement of $B$ in $\Sigma^{-1}$, which is strictly positive, one has
$$\Sigma^{-1} = \begin{pmatrix} a & \boldsymbol{v}^\top \\ \boldsymbol{v} & (LL^\top)^{-1} \end{pmatrix} = \begin{pmatrix} \sqrt{w} & \boldsymbol{v}^\top L \\ \boldsymbol{0}_n & (L^\top)^{-1} \end{pmatrix} \begin{pmatrix} \sqrt{w} & \boldsymbol{0}_n^\top \\ L^\top \boldsymbol{v} & L^{-1} \end{pmatrix},$$
where $\boldsymbol{0}_n$ is an $n \times 1$ vector of zeros. Accordingly,
$$\Sigma = \begin{pmatrix} \sqrt{w} & \boldsymbol{0}_n^\top \\ L^\top \boldsymbol{v} & L^{-1} \end{pmatrix}^{-1} \begin{pmatrix} \sqrt{w} & \boldsymbol{v}^\top L \\ \boldsymbol{0}_n & (L^\top)^{-1} \end{pmatrix}^{-1} = \begin{pmatrix} 1/\sqrt{w} & \boldsymbol{0}_n^\top \\ -LL^\top \boldsymbol{v}/\sqrt{w} & L \end{pmatrix} \begin{pmatrix} 1/\sqrt{w} & \boldsymbol{0}_n^\top \\ -LL^\top \boldsymbol{v}/\sqrt{w} & L \end{pmatrix}^\top = CC^\top.$$
Recall that $w$ is strictly positive and that all the entries of $L$ and $-\boldsymbol{v}$ are nonnegative. Hence, all the elements of $C$ are nonnegative, and the argument is complete. □

5 Discussion

This paper shows that the Gaussian product inequality (2) holds under Condition (III) when the reals $\alpha_1, \ldots, \alpha_d$ are nonnegative integers. This assumption is further seen to be strictly weaker than Condition (II).
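Proposition 3 is easy to illustrate numerically: starting from an inverse covariance matrix with nonpositive off-diagonal entries, the Cholesky factor of the covariance itself should come out entrywise nonnegative. A minimal Python sketch, with a hypothetical $3 \times 3$ example (the tridiagonal M-matrix below is not from the paper):

```python
from math import sqrt

def cholesky(S):
    """Lower-triangular C with S = C C^T (Cholesky-Banachiewicz scheme)."""
    n = len(S)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = S[i][j] - sum(C[i][k] * C[j][k] for k in range(j))
            C[i][j] = sqrt(s) if i == j else s / C[j][j]
    return C

# Hypothetical example: Sigma^{-1} = [[2,-1,0],[-1,2,-1],[0,-1,2]] has
# nonpositive off-diagonal entries; its inverse is Sigma below, equal to
# (1/4) * [[3,2,1],[2,4,2],[1,2,3]].
Sigma = [[0.75, 0.5, 0.25],
         [0.5, 1.0, 0.5],
         [0.25, 0.5, 0.75]]
C = cholesky(Sigma)
assert all(c >= 0.0 for row in C for c in row)  # conclusion of Proposition 3
```

Replacing a single off-diagonal entry of $\Sigma^{-1}$ by a positive value typically produces a negative entry in $C$, which is consistent with the implication being one-directional.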
It thus follows from the implications in Figure 1 that, when the reals $\alpha_1, \ldots, \alpha_d$ are nonnegative integers, inequality (2) holds more generally than under the $\text{MTP}_2$ condition of Karlin and Rinott [8].

Shortly after the first draft of this article was posted on arXiv, extensions of Proposition 2 were announced by Russell and Sun [22] and Edelmann et al. [4]; see Lemma 2.3 and Theorem 2.1, respectively, in their manuscripts. Beyond priority claims, which are nugatory, the originality of the present work lies mainly in its method of proof and in the clarification it provides of the relationships between the various assumptions made in the relevant literature, as summarized in Figure 1.

Beyond its intrinsic interest, the approach to the proof of the GPI presented herein, together with its link to the complete monotonicity of multinomial probabilities previously shown by Ouimet [16] and Qi et al. [17], hints at a deep relationship between the $\text{MTP}_2$ class for the multivariate gamma distribution of Krishnamoorthy and Parthasarathy [10], the range of admissible parameter values for its infinite divisibility, and the complete monotonicity of its Laplace transform; see the work by Royen on the GCI conjecture [18,19,20,21] and Theorems 1.2 and 1.3 of Scott and Sokal [23]. These topics, and the proof or refutation of the GPI in its full generality, provide interesting avenues for future research.

Appendix: Technical lemma

The following result, used in the proof of Proposition 2, extends Lemma 1 of Alzer [2] from the case $d = 1$ to an arbitrary integer $d \in \mathbb{N}$.
It was already reported by Ouimet [16], see his Lemma 4.1, but its short statement and proof are included here to make the article more self-contained.

Lemma A.1. For every integer $d \in \mathbb{N}$ and real numbers $y \in (1, \infty)$ and $u_1, \ldots, u_{d+1} \in (0,1)$ such that $u_1 + \cdots + u_{d+1} = 1$, one has
$$\frac{1}{y-1} > \sum_{i=1}^{d+1} \frac{1}{y^{1/u_i} - 1}. \tag{A.1}$$

Proof. The proof is by induction on the integer $d$. The case $d = 1$ is the statement of Lemma 1 of Alzer [2]. Fix an integer $d \ge 2$ and assume that inequality (A.1) holds for every smaller integer. Fix any reals $y \in (1, \infty)$ and $u_1, \ldots, u_d \in (0,1)$ such that $\|\boldsymbol{u}\|_1 = u_1 + \cdots + u_d < 1$, and write $u_{d+1} = 1 - \|\boldsymbol{u}\|_1 > 0$. Calling on Alzer's result, one has
$$\frac{1}{y-1} > \frac{1}{y^{1/\|\boldsymbol{u}\|_1} - 1} + \frac{1}{y^{1/(1-\|\boldsymbol{u}\|_1)} - 1}.$$
Therefore, the conclusion follows if one can show that
$$\frac{1}{y^{1/\|\boldsymbol{u}\|_1} - 1} > \sum_{i=1}^{d} \frac{1}{y^{1/u_i} - 1}.$$
Upon setting $z = y^{1/\|\boldsymbol{u}\|_1}$ and $v_i = u_i/\|\boldsymbol{u}\|_1$, so that $v_1 + \cdots + v_d = 1$, one finds that the above inequality is equivalent to
$$\frac{1}{z-1} > \sum_{i=1}^{d} \frac{1}{z^{1/v_i} - 1},$$
which is true by the induction assumption. Therefore, the argument is complete. □
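Inequality (A.1) is easy to probe numerically. The sketch below checks it for one hypothetical choice of parameters, $d = 3$, $y = 2$, and weights $u = (0.1, 0.2, 0.3, 0.4)$ summing to one; the helper `alzer_gap` is introduced here for illustration only.

```python
# Numerical check of inequality (A.1): the gap between its left- and
# right-hand sides should be strictly positive.
import math

def alzer_gap(y, u):
    """Left-hand side minus right-hand side of inequality (A.1)."""
    assert y > 1 and all(0 < ui < 1 for ui in u)
    assert math.isclose(sum(u), 1.0)   # the weights must sum to one
    return 1.0 / (y - 1.0) - sum(1.0 / (y ** (1.0 / ui) - 1.0) for ui in u)

gap = alzer_gap(2.0, (0.1, 0.2, 0.3, 0.4))
assert gap > 0   # the inequality holds strictly for this choice
print(f"gap = {gap:.6f}")
```

Sweeping over many random weight vectors and values $y > 1$ gives the same strict positivity, consistent with the lemma.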

Journal: Dependence Modeling (De Gruyter)

Published: Jan 1, 2022

Keywords: complete monotonicity; gamma function; Gaussian product inequality; Gaussian random vector; moment inequality; multinomial; multivariate normal; polygamma function

MSC 2020: Primary 60E15; Secondary 05A20; 33B15; 62E15; 62H10; 62H12
