A structural Markov property for decomposable graph laws that allows control of clique intersections

Summary

We present a new kind of structural Markov property for probabilistic laws on decomposable graphs, which allows the explicit control of interactions between cliques and so is capable of encoding some interesting structure. We prove the equivalence of this property to an exponential family assumption, and discuss identifiability, modelling, inferential and computational implications.

1. Introduction

The conditional independence properties among components of a multivariate distribution are key to understanding its structure, and precisely describe the qualitative manner in which information flows among the variables. These properties are well represented by a graphical model, in which nodes, representing variables in the model, are connected by undirected edges, encoding the conditional independence properties of the distribution (Lauritzen, 1996). Inference about the underlying graph from observed data is therefore an important task, sometimes known as structural learning. Bayesian structural learning requires specification of a prior distribution on graphs, and there is a need for a flexible but tractable family of such priors, capable of representing a variety of prior beliefs about the conditional independence structure. In the interests of tractability and scalability, there has been a strong focus on the case where the true graph may be assumed to be decomposable. Just as this underlying graph localizes the pattern of dependence among variables, it is appealing that the prior on the graph itself should exhibit dependence locally, in the same graphical sense. Informally, the presence or absence of two edges should be independent when they are sufficiently separated by other edges in the graph.
The first class of graph priors demonstrating such a structural Markov property was presented in a 2012 Cambridge University PhD thesis by Simon Byrne, and later published in Byrne & Dawid (2015). That priors with this property are also tractable arises from an equivalence, demonstrated by Byrne & Dawid (2015), between their structural Markov property for decomposable graphs and the assumption that the graph law follows a clique exponential family. This important result is yet another example of a theme in the literature, making a connection between systems of conditional independence statements among random variables, often encoded graphically, and factorizations of the joint probability distribution of these variables. Examples include the global Markov property for undirected graphs, which is necessary, and under an additional condition sufficient, for the joint distribution to factorize as a product of potentials over cliques; the Markov property for directed acyclic graphs, which is equivalent to the existence of a factorization of the joint distribution into child-given-parents conditional distributions; and the existence of a factorization into clique and separator marginal distributions for undirected decomposable graphs. All of these results are now well known, and for these and other essentials of graphical models, the reader is referred to Lauritzen (1996). In this paper, we introduce a weaker version of this structural Markov property, and show that it is nevertheless sufficient for equivalence to a certain exponential family, and therefore to a factorization of the graph law. This gives us a more flexible family of graph priors for use in modelling data. We show that the advantages of conjugacy, and its favourable computational implications, remain true in this broader class, and we illustrate the richer structures that are generated by such priors.
Efficient prior and posterior sampling from decomposable graphical models can be performed with the junction tree sampler of Green & Thomas (2013).

2. The weak structural Markov property

2.1. Notation and terminology

We follow the terminology for graphs and graphical models of Lauritzen (1996), with a few exceptions and additions, noted here. Many of these terms are also used by Byrne & Dawid (2015). We use the term graph law for the distribution of a random graph, but do not use a different symbol, for example $$\tilde{\mathcal{G}}$$, for a random graph. For any graph $$\mathcal{G}$$ on a vertex set $$V$$ and any subset $$A\subseteq V$$, $$\mathcal{G}_A$$ is the subgraph induced on vertex set $$A$$; its edges are those of $$\mathcal{G}$$ joining vertices that are both in $$A$$. A complete subgraph is one where all pairs of vertices are joined. If $$\mathcal{G}_A$$ is complete and maximal, in the sense that $$\mathcal{G}_B$$ is not complete for any superset $$B\supset A$$, then $$A$$ is a clique. Here and throughout the paper, the symbols $$\supset$$ and $$\subset$$ refer to strict inclusion. A junction tree based on a decomposable graph $$\mathcal{G}$$ on vertex set $$V$$ is any graph whose vertices are the cliques of $$\mathcal{G}$$, joined by edges in such a way that for any $$A\subseteq V$$, those vertices of the junction tree containing $$A$$ form a connected subtree. A separator is the intersection of two adjacent cliques in any junction tree. As in Green & Thomas (2013), we adopt the convention that we allow separators to be empty, with the effect that every junction tree is connected. A covering pair is any pair $$(A,B)$$ of subsets of $$V$$ such that $$A\cup B=V$$; $$(A,B)$$ is a decomposition if $$A\cap B$$ is complete and separates $$A\setminus B$$ and $$B\setminus A$$. Figure 1 illustrates the idea of a decomposition.

Fig. 1. Example of a decomposition: $$A\cap B$$ is complete and separates $$A\setminus B$$ and $$B \setminus A$$.

2.2. Definitions

We begin with the definition of the structural Markov property from Byrne & Dawid (2015).

Definition 1 (Structural Markov property). A graph law $$\mathfrak{G}(\mathcal{G})$$ over the set $$\mathfrak{U}$$ of undirected decomposable graphs on $$V$$ is structurally Markov if for any covering pair $$(A,B)$$, we have
$$ \mathcal{G}_A \perp\!\!\!\perp \mathcal{G}_B \mid \{\mathcal{G} \in \mathfrak{U}(A,B)\} \quad[\mathfrak{G}], $$
where $$\mathfrak{U}(A,B)$$ is the set of decomposable graphs for which $$(A,B)$$ is a decomposition.

The various conditional independence statements each restrict the graph law, so we can weaken the definition by reducing the number of such statements, for example by replacing the conditioning set by a smaller one. This motivates our definition.

Definition 2 (Weak structural Markov property). A graph law $$\mathfrak{G}(\mathcal{G})$$ over the set $$\mathfrak{U}$$ of undirected decomposable graphs on $$V$$ is weakly structurally Markov if for any covering pair $$(A,B)$$, we have
$$ \mathcal{G}_A \perp\!\!\!\perp \mathcal{G}_B \mid \{\mathcal{G} \in \mathfrak{U}^\star(A,B)\} \quad[\mathfrak{G}], $$
where $$\mathfrak{U}^\star(A,B)$$ is the set of decomposable graphs for which $$(A,B)$$ is a decomposition and $$A\cap B$$ is a clique, i.e., a maximal complete subgraph, in $$\mathcal{G}_A$$.

The only difference from the structural Markov property is that we condition on the event $$\mathfrak{U}^\star(A,B)$$, not $$\mathfrak{U}(A,B)$$, so we only require independence when $$A\cap B$$ is a clique in $$\mathcal{G}_A$$, that is, when it is maximal in $$\mathcal{G}_A$$; it is already complete because $$(A,B)$$ is a decomposition.
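The conditioning event is mechanical to check for a given graph and covering pair. A minimal Python sketch, assuming the graph is decomposable and stored as a dict mapping each vertex to its set of neighbours; the function names are our own illustrative choices:

```python
from collections import deque

def is_complete(adj, S):
    """All pairs of distinct vertices in S are joined by an edge."""
    S = set(S)
    return all(S - {v} <= adj[v] for v in S)

def separates(adj, C, X, Y):
    """No path from X to Y once the vertices of C are removed."""
    C = set(C)
    X, Y = set(X) - C, set(Y) - C
    seen, queue = set(X), deque(X)
    while queue:
        for w in adj[queue.popleft()] - C:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return not (seen & Y)

def in_U_star(adj, A, B):
    """G lies in U*(A,B): (A,B) is a decomposition of G and A∩B is a
    clique, i.e., a maximal complete subgraph, of the induced subgraph
    G_A.  Assumes the graph given by the neighbour dict is decomposable."""
    A, B = set(A), set(B)
    S = A & B
    if A | B != set(adj):
        return False                 # not even a covering pair
    if not (is_complete(adj, S) and separates(adj, S, A - B, B - A)):
        return False                 # (A,B) is not a decomposition
    # maximality of A∩B in G_A: no vertex of A\S is joined to all of S
    return not any(S <= adj[v] for v in A - S)
```

On the five-vertex configuration of Fig. 2 below, adding an edge that joins vertex $$1$$ to both $$2$$ and $$4$$ makes $$A\cap B$$ non-maximal in $$\mathcal{G}_A$$, so the graph leaves $$\mathfrak{U}^\star(A,B)$$ while remaining in $$\mathfrak{U}(A,B)$$.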
Obviously, by symmetry, $$\mathfrak{U}^\star(A,B)$$ could be defined with $$A$$ and $$B$$ interchanged without changing the meaning, but it is not the same as conditioning on the set of decomposable graphs for which $$(A,B)$$ is a decomposition and $$A\cap B$$ is a clique in at least one of $$\mathcal{G}_A$$ and $$\mathcal{G}_B$$, since in the conditional independence statement, it is $$\mathcal{G}$$ that is random, not $$(A,B)$$. The weak structural Markov property is illustrated in Fig. 2.

Fig. 2. Illustration of the weak structural Markov property on a five-vertex graph: numbering the five vertices clockwise from top left, $$A$$ and $$B$$ consist of the vertices $$\{1,2,4,5\}$$ and $$\{2,3,4\}$$ respectively. For the graph $$\mathcal{G}$$ to lie in $$\mathfrak{U}^\star(A,B)$$, the edge $$(2,4)$$ must always be present, and neither $$1$$ nor $$5$$ can be connected to both $$2$$ and $$4$$. The graph law must choose independently among the 16 possibilities for $$\mathcal{G}_A$$ on the left and the 4 possibilities for $$\mathcal{G}_B$$ on the right. Under the structural Markov property, there are 16 further possibilities for $$\mathcal{G}_A$$, obtained by allowing $$1$$ and $$5$$ to be connected to both $$2$$ and $$4$$, and the choice among all 32 on the left and the 4 on the right must be made independently.

2.3. Clique-separator exponential family

We now define an important family of graph laws by an algebraic specification. This family has previously been described, though not named, by Bornn & Caron (2011). These authors do not examine any Markov properties of the family, but advocate it for flexible prior specification.

Definition 3 (Clique-separator exponential family and clique-separator factorization laws). The clique-separator exponential family is the exponential family of graph laws over $$\mathfrak{F} \subseteq \mathfrak{U}$$ with $$(t^+,t^-)$$ as the natural statistic with respect to the uniform measure on $$\mathfrak{U}$$, where $$t^+_A=\max(t_A,0)$$, $$t^-_A=\min(t_A,0)$$, and
$$ t_A(\mathcal{G})=\begin{cases} 1, & \text{$A$ is a clique in $\mathcal{G}$}, \\ -\nu_A(\mathcal{G}), & \text{$A$ is a separator in $\mathcal{G}$}, \\ 0, & \text{otherwise}, \end{cases} $$
with $$\nu_A(\mathcal{G})$$ denoting the multiplicity of separator $$A$$ in $$\mathcal{G}$$. That is, laws in the family have densities of the form
$$ \pi(\mathcal{G}) \propto \exp\{\omega^+ t^+(\mathcal{G})+\omega^- t^-(\mathcal{G})\}, $$
where $$\omega^+=\{\omega^+_A: A\subseteq V\}$$ and $$\omega^-=\{\omega^-_A: A\subseteq V\}$$ are real-valued set-indexed parameters, and $$t^+(\mathcal{G})=\{t^+_A(\mathcal{G}): A\subseteq V\}$$ and $$t^-(\mathcal{G})=\{t_A^-(\mathcal{G}): A\subseteq V\}$$. Here all vectors indexed by subsets of $$V$$ are listed in a fixed but arbitrary order, and the product of two such vectors is the scalar product.
Note that $$t^+_A(\mathcal{G})$$ is simply $$1$$ if $$A$$ is a clique in $$\mathcal{G}$$ and is $$0$$ otherwise, while $$t^-_A(\mathcal{G})$$ is $$-\nu_A(\mathcal{G})$$ if $$A$$ is a separator in $$\mathcal{G}$$ and is again $$0$$ otherwise. This density $$\pi$$ can be equivalently written as a clique-separator factorization law
\begin{equation} \pi_{\phi,\psi}(\mathcal{G}) \propto \frac{\prod_{C \in \mathcal{C}} \phi_C}{\prod_{S \in \mathcal{S}}\psi_S}, \end{equation} (1)
where $$\mathcal{C}$$ is the set of cliques and $$\mathcal{S}$$ the multiset of separators of $$\mathcal{G}$$, and $$\phi_C=\exp(\omega^+_C)$$ and $$\psi_S=\exp(\omega^-_S)$$; this is the form we prefer to use hereafter. This definition is an immediate generalization of that of the clique exponential family of Byrne & Dawid (2015), in which $$t=t^++t^-$$ is the natural statistic, so that $$\omega^+_A$$ and $$\omega^-_A$$ coincide, as do $$\phi_A$$ and $$\psi_A$$. Byrne & Dawid (2015) show that for any fixed vertex set, the structurally Markov laws are precisely those in a clique exponential family. In the next section we show an analogous alignment between the weak structural Markov property and clique-separator factorization laws.

2.4. Main result

Theorem 1. A graph law $$\mathfrak{G}$$ over the set $$\mathfrak{U}$$ of undirected decomposable graphs on $$V$$, whose support is all of $$\mathfrak{U}$$, is weakly structurally Markov if and only if it is a clique-separator factorization law.

Remark 1. Exactly as in Byrne & Dawid (2015, Theorem 3.15), it is possible to weaken the condition of full support, that is, positivity of the density $$\pi$$. It is enough that if $$\mathcal{G}$$ is in the support, then so is $$\mathcal{G}^{(C)}$$ for any clique $$C$$ of $$\mathcal{G}$$.

Our proof makes use of a compact notation for decomposable graphs, and a kind of ordering of cliques that is more stringent than a perfect ordering. A decomposable graph is determined by its cliques.
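Concretely, the factorization (1) is easy to evaluate: extract the cliques and separators, then multiply potentials. A minimal Python sketch, assuming a decomposable graph stored as a dict mapping each vertex to its set of neighbours, and log-space potential dicts keyed by frozensets; the function names and representation are our own:

```python
def perfect_sequence(adj):
    """Cliques of a non-empty decomposable graph in a perfect sequence,
    with the separators S_j = C_j ∩ (C_1 ∪ ... ∪ C_{j-1}), found by
    maximum cardinality search; assumes the graph is decomposable."""
    numbered, weight, candidates = set(), {v: 0 for v in adj}, []
    while len(numbered) < len(adj):
        # visit an unnumbered vertex with the most numbered neighbours
        v = max((u for u in adj if u not in numbered), key=weight.get)
        candidates.append((adj[v] & numbered) | {v})
        numbered.add(v)
        for w in adj[v] - numbered:
            weight[w] += 1
    # the maximal candidate sets are exactly the cliques, in visit order
    cliques = [C for C in candidates if not any(C < D for D in candidates)]
    seen, separators = set(cliques[0]), []
    for C in cliques[1:]:
        separators.append(C & seen)
        seen |= C
    return cliques, separators

def log_density(adj, log_phi, log_psi):
    """Unnormalized log-density of the factorization law (1): the sum of
    log phi_C over cliques minus the sum of log psi_S over separators,
    the latter counted with multiplicity."""
    cliques, separators = perfect_sequence(adj)
    return (sum(log_phi[frozenset(C)] for C in cliques)
            - sum(log_psi[frozenset(S)] for S in separators))
```

For the graph with cliques $$\{1,2\}$$ and $$\{2,3,4\}$$, `perfect_sequence` recovers those two cliques together with the single separator $$\{2\}$$.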
We write $$\mathcal{G}^{(C_1,C_2,\ldots)}$$ for the decomposable graph with cliques $$C_1,C_2,\ldots$$. Without ambiguity we can omit singleton cliques from the list. In case the vertex set $$V$$ of the graph is not clear from the context, we emphasize it thus: $$\mathcal{G}^{(C_1,C_2,\ldots)}_V=\mathcal{G}^{(C_1,C_2,\ldots)}$$. In particular, $$\mathcal{G}^{(A)}$$ is the graph on $$V$$ that is complete in $$A$$ and empty otherwise, and $$\mathcal{G}^{(A,B)}$$ is the graph on $$V$$ that is complete on both $$A$$ and $$B$$ and empty otherwise. Recall that, starting from a list of the cliques, we can place these in a perfect sequence and simultaneously construct a junction tree by maintaining two complementary subsets: those cliques visited and those unvisited. We initialize the process by placing an arbitrary clique in the visited set and all others in the unvisited set. At each successive stage, we move one unvisited clique into the visited set, choosing arbitrarily from those that are available, i.e., are adjacent to a visited clique in the junction tree; at the same time a new link is added to the junction tree.

Definition 4 (Pluperfect ordering). If at each step $$j$$ we select an available clique, labelling it $$C_j$$, such that the separator $$S_j = C_j \cap \bigcup_{i<j} C_i$$ is not a proper subset of any other separator that would arise by choosing a different available clique, then we call the ordering pluperfect.

Clearly, it is computationally convenient and sufficient, but not necessary, to choose the available clique that creates one of the largest of the separators, a construction closely related to the maximum cardinality search of Tarjan & Yannakakis (1984). This shows that a pluperfect ordering always exists and that any clique can be chosen as the first.

Lemma 1. Let $$\pi$$ be the density of a weakly structurally Markov graph law on $$V$$, and let $$\mathcal{G}$$ be a decomposable graph on $$V$$.
Consider a particular pluperfect ordering $$C_1, \ldots, C_J$$ of the cliques of $$\mathcal{G}$$ and a junction tree in which the links connect $$C_j$$ and $$C_{h(j)}$$ via separator $$S_j$$ for each $$j=2,\ldots,J$$, where $$h(j)\leq j-1$$. For each such $$j$$, let $$R_j$$ be any subset of $$C_{h(j)}$$ that is a proper superset of $$S_j$$. Then for any choice of such $$\{R_j\}$$, we have
$$ \pi(\mathcal{G}) =\prod_j \pi(\mathcal{G}^{(C_j)}) \times \prod_{j\geq 2} \frac{\pi(\mathcal{G}^{(R_j,C_j)})}{\pi(\mathcal{G}^{(R_j)})\pi(\mathcal{G}^{(C_j)})}\text{.} $$

Proof. Let $$B=\bigcup_{i=1}^{j-1} C_i$$ and set $$A=(V\setminus B) \cup R_j$$. Then $$R_j\cap C_j=S_j$$, $$(A,B)$$ is a decomposition, and $$A\cap B=R_j$$. We claim that this intersection $$R_j$$ is a clique in $$\mathcal{G}_A$$. Suppose, for a contradiction, that it is not, i.e., that it is not maximal. Then there exists a vertex $$v$$ in $$A \setminus R_j$$ such that $$R'=R_j \cup \{v\}$$ is complete, so $$R'$$ is a subset of a clique in the original graph $$\mathcal{G}$$. Either all the cliques containing $$R'$$ are among $$\{C_i: i<j\}$$, so that $$v$$ is not in $$A$$, which is a contradiction; or one of them, say $$C_\star$$, is among $$\{C_i: i\geq j\}$$, in which case there is a path in the junction tree between $$C_{h(j)}$$ and $$C_\star$$ with every clique along the path containing $$R_j$$, so there must be a separator that is a superset of $$R_j$$, and hence a strict superset of $$S_j$$, that connects to $$C_{h(j)}$$ and is among $$\{S_{j+1}, \ldots, S_J\}$$. This contradicts the assumption that the ordering is pluperfect. This choice of $$(A,B)$$ forms a covering pair and $$\mathcal{G} \in \mathfrak{U}^\star(A,B)$$, so by the weak structural Markov property, we know that $$\mathcal{G}_A$$ and $$\mathcal{G}_B$$ are independent under $$\pi_{A,B}$$, their joint distribution given that $$A\cap B$$ is complete in $$\mathcal{G}^{(C_1, \ldots,C_j)}$$.
Thus we have the crossover identity
$$ \pi_{A,B}(\mathcal{G}_A^{(R_j)},\mathcal{G}_B^{(R_j)}) \times \pi_{A,B}(\mathcal{G}_A^{(R_j,C_j)},\mathcal{G}_B^{(C_1,\ldots,C_{j-1})}) = \pi_{A,B}(\mathcal{G}_A^{(R_j)},\mathcal{G}_B^{(C_1,\ldots,C_{j-1})}) \times \pi_{A,B}(\mathcal{G}_A^{(R_j,C_j)},\mathcal{G}_B^{(R_j)}) $$
or, equivalently,
$$ \pi(\mathcal{G}^{(R_j)})\pi(\mathcal{G}^{(C_1,\ldots,C_j)})=\pi(\mathcal{G}^{(C_1,\ldots,C_{j-1})})\pi(\mathcal{G}^{(R_j,C_{j})})\text{.} $$
We can therefore write
\begin{align*} \pi(\mathcal{G}) &= \pi(\mathcal{G}^{(C_1)}) \prod_{j\geq 2} \frac{\pi(\mathcal{G}^{(C_1,\ldots,C_{j})})}{\pi(\mathcal{G}^{(C_1,\ldots,C_{j-1})})}= \pi(\mathcal{G}^{(C_1)}) \prod_{j\geq 2} \frac{\pi(\mathcal{G}^{(R_j,C_{j})})}{\pi(\mathcal{G}^{(R_j)})} \\ & =\prod_j \pi(\mathcal{G}^{(C_j)}) \times \prod_{j\geq 2} \frac{\pi(\mathcal{G}^{(R_j,C_{j})})}{\pi(\mathcal{G}^{(R_j)})\pi(\mathcal{G}^{(C_j)})}. \end{align*} □

Lemma 2. Let $$\pi$$ be the density of a weakly structurally Markov graph law on $$V$$, and let $$S$$ be any subset of the vertices $$V$$ with $$|S|\leq |V|-2$$. Then $$\pi(\mathcal{G}^{(R_1,R_2)})/\{\pi(\mathcal{G}^{(R_1)})\pi(\mathcal{G}^{(R_2)})\}$$ depends only on $$S$$, for all pairs of subsets $$R_1$$ and $$R_2$$ such that $$R_1\cup R_2\subseteq V$$, $$R_1\cap R_2=S$$, and both $$R_1$$ and $$R_2$$ are strict supersets of $$S$$.

Proof. Consider the decomposable graph $$\mathcal{G}^{(R_1,R_2)}$$ whose unique junction tree has cliques $$R_1$$ and $$R_2$$ and separator $$S$$. Applying Lemma 1 to this graph, we have
$$ \pi(\mathcal{G}^{(R_1,R_2)})=\pi(\mathcal{G}^{(R_1)}) \pi(\mathcal{G}^{(R_2)}) \times \frac{\pi(\mathcal{G}^{(R,R_2)})}{\pi(\mathcal{G}^{(R)})\pi(\mathcal{G}^{(R_2)})}, $$
that is,
$$ \frac{\pi(\mathcal{G}^{(R_1,R_2)})}{\pi(\mathcal{G}^{(R_1)}) \pi(\mathcal{G}^{(R_2)})}=\frac{\pi(\mathcal{G}^{(R,R_2)})}{\pi(\mathcal{G}^{(R)})\pi(\mathcal{G}^{(R_2)})}, $$
for any $$R$$ with $$S\subset R\subseteq R_1$$.
This means that any vertices may be added to or removed from $$R_1$$, or by symmetry added to or removed from $$R_2$$, without changing the value of $$\pi(\mathcal{G}^{(R_1,R_2)})/\{\pi(\mathcal{G}^{(R_1)}) \pi(\mathcal{G}^{(R_2)})\}$$, provided it remains true that $$R_1\cup R_2\subseteq V$$, $$R_1\cap R_2=S$$, $$R_1\supset S$$ and $$R_2\supset S$$. But any unordered pair of subsets $$R_1$$ and $$R_2$$ of $$V$$ with $$R_1\cup R_2\subseteq V$$, $$R_1\cap R_2=S$$, $$R_1\supset S$$ and $$R_2\supset S$$ can be transformed stepwise into any other such pair by successively adding or removing vertices to or from one or other of the subsets. Thus $$\pi(\mathcal{G}^{(R_1,R_2)})/\{\pi(\mathcal{G}^{(R_1)})\pi(\mathcal{G}^{(R_2)})\}$$ can depend only on $$S$$; we will denote it by $$1/\psi_S$$. □

Proof of Theorem 1. Suppose that $$\pi$$ is the density of a weakly structurally Markov graph law on $$V$$. For each $$A\subseteq V$$, let $$\phi_A=\pi(\mathcal{G}^{(A)})$$. Then by Lemmas 1 and 2,
$$ \pi(\mathcal{G}) =\frac{\prod_j \phi_{C_j}}{\prod_{j \geq 2} \psi_{S_j}}\text{.} $$
Since $$\mathcal{G}^{(\emptyset)}$$, $$\mathcal{G}^{(\{v\})}$$, $$\mathcal{G}^{(\{w\})}$$ and $$\mathcal{G}^{(\{v\},\{w\})}$$ for distinct vertices $$v,w\in V$$ all denote the same graph, we must have $$\phi_{\{v\}}=\pi(\mathcal{G}^{(\emptyset)})$$ for all $$v$$, and also $$\psi_{\emptyset}=\pi(\mathcal{G}^{(\emptyset)})$$. Under these conditions, the constant of proportionality in (1) is evidently 1. Conversely, it is trivial to show that if the clique-separator factorization property (1) applies to $$\pi$$, then $$\pi$$ is the density of a weakly structurally Markov graph law. □

2.5. Identifiability of parameters

Byrne & Dawid (2015, Proposition 3.14) point out that their $$\{t_A(\mathcal{G})\}$$ values are subject to $$|V|+1$$ linear constraints, $$\sum_{A\subseteq V} t_A(\mathcal{G})=1$$ and $$\sum_{A\ni v} t_A(\mathcal{G})=1$$ for all $$v\in V$$, so that their parameters $$\omega_A$$, or equivalently $$\phi_A$$, are not all identifiable. They obtain identifiability by proposing a standardized vector $$\omega^\star$$, with $$|V|+1$$ necessarily zero entries, that is a linear transform of $$\omega$$. By the same token, the $$|V|+1$$ constraints on $$t_A(\mathcal{G})$$ are linear constraints on $$t^+_A(\mathcal{G})$$ and $$t^-_A(\mathcal{G})$$, and so $$\{\phi_A\}$$ and $$\{\psi_A\}$$ are not all identifiable. We could obtain identifiable parameters by, for example, choosing $$\psi_{\emptyset}=1$$ and $$\phi_{\{v\}}=1$$ for all $$v \in V$$ or, as above, by setting $$\psi_{\emptyset}=\pi(\mathcal{G}^{(\emptyset)})$$ and $$\phi_{\{v\}}=\pi(\mathcal{G}^{(\emptyset)})$$ for all $$v$$; other ways are also possible. Note in addition that $$\emptyset$$ cannot be a clique, and neither $$A=V$$ nor any subset $$A$$ of $$V$$ with $$|A|=|V|-1$$ can be a separator, so the corresponding $$\phi_A$$ and $$\psi_A$$ are never used. The dimension of the space of clique-separator factorization laws is therefore $$2 \times 2^{|V|}-2|V|-3$$, nearly twice that of clique exponential family laws, $$2^{|V|}-|V|-1$$. For example, when $$|V|=3$$, all graphs are decomposable, and all graph laws are clique-separator factorization laws, while clique exponential family laws have dimension 4; when $$|V|=4$$, 61 out of 64 graphs are decomposable, and the dimensions of the two spaces of laws are 21 and 11; when $$|V|=7$$, only 617 675 out of the $$2^{21}$$ graphs are decomposable, and the dimensions are 239 and 120.

3. Some implications for modelling and inference

3.1. Conjugacy and posterior updating

As priors for the graph underlying a model $$P(X \mid \mathcal{G})$$ for data $$X$$, clique-separator factorization laws are conjugate for decomposable likelihoods, in the case where there are no unknown parameters in the distribution: given $$X$$ from the model
$$ p(X\mid\mathcal{G}) = \frac{\prod_{C\in \mathcal{C}} \lambda_C(X_C)}{\prod_{S\in \mathcal{S}} \lambda_S(X_S)} =\prod_{A\subseteq V} \lambda_A(X_A)^{t_A(\mathcal{G})}, $$
where $$\lambda_A(X_A)$$ denotes the marginal distribution of $$X_A$$, the posterior for $$\mathcal{G}$$ is
$$ p(\mathcal{G}\mid X) \propto \frac{\prod_{C\in \mathcal{C}} \phi_C\lambda_C(X_C)}{\prod_{S\in \mathcal{S}} \psi_S\lambda_S(X_S)}, $$
that is, a clique-separator factorization law with parameters $$\{\phi_A\lambda_A(x_A)\}$$ and $$\{\psi_A\lambda_A(x_A)\}$$. More generally, when there are parameters in the graph-specific likelihoods, the notions of compatibility and hypercompatibility (Byrne & Dawid, 2015) allow the extension of the idea of structural Markovianity to the joint Markovianity of the graph and the parameters, and give the form of the corresponding posterior.

3.2. Computational implications

Computing posterior distributions of graphs on a large scale remains problematic, with Markov chain Monte Carlo methods seemingly the only option except for toy problems, and these methods having notoriously poor mixing. However, the junction tree sampler of Green & Thomas (2013) seems to give acceptable performance for problems of up to a few hundred variables. Posteriors induced by clique-separator factorization law priors are ideal material for these samplers, which explicitly use a clique-separator representation of all graphs and distributions. In Bornn & Caron (2011), a different Markov chain Monte Carlo sampler for clique-separator factorization laws is introduced.
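The conjugate update of §3.1 amounts to multiplying each clique and separator potential by the corresponding marginal likelihood. A minimal log-space sketch, assuming potential dicts keyed by frozensets of vertices; `log_marginal` is a hypothetical callable standing in for $$\log \lambda_A(x_A)$$, supplied by the model:

```python
def posterior_potentials(log_phi, log_psi, log_marginal):
    """Conjugate update: the posterior is again a clique-separator
    factorization law, with each prior potential multiplied by the
    marginal likelihood of the data on that subset (addition in log
    space).  Sketch only; the representation is our own choice."""
    return ({A: lp + log_marginal(A) for A, lp in log_phi.items()},
            {A: lp + log_marginal(A) for A, lp in log_psi.items()})
```

Because the update touches each subset's potential separately, the posterior is immediately in the form required by a junction tree sampler.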
We have evidence that the examples shown in the figures of Bornn & Caron (2011) are not representative samples from the particular models claimed, due to poor mixing.

3.3. Modelling

Here we briefly discuss the way in which choices of particular forms for the parameters $$\phi_A$$ and $$\psi_A$$ govern the qualitative and even quantitative aspects of the graph law. These choices are important in designing a graph law for a particular purpose, whether or not this is prior modelling in Bayesian structural learning. A limitation of clique exponential family models is that, because large clique potentials count in favour of a graph and large separator potentials count against, it is difficult for these laws to encourage the same features in both cliques and separators. For instance, if we choose clique potentials to favour large cliques, we seem to be forced to favour small separators. A popular choice for a graph prior in past work on Bayesian structural learning is the well-known Erdős–Rényi random graph model, in which each of the $$|V|(|V|-1)/2$$ possible edges on the vertex set $$V$$ is present independently, with probability $$p$$. This model is amenable to theoretical study, but realizations of it typically exhibit no discernible structure. When restricted to decomposable graphs, the Erdős–Rényi model is a rather extreme example of a clique exponential family law, which arises by taking $$\phi_A=\{p/(1-p)\}^{|A|(|A|-1)/2}$$. Again realizations appear unstructured, essentially because of the quadratic dependence on clique or separator size in the exponent of the potentials $$\phi_A$$. For a concrete example of a model with much more structure, suppose that our decomposable graph represents a communication network. There are two types of vertices, hubs and non-hubs. Adjacent vertices can all communicate with each other, but only hubs will relay messages.
So, for a non-hub to communicate with a nonadjacent non-hub, there must be a path in the graph from one to the other in which all intermediate nodes are hubs. This example has the interesting feature that, using only local properties, it enforces a global property, namely universal communication. A necessary and sufficient condition for universal communication is that every separator contains a hub. This implies that either the graph is a single clique, or every clique must also contain a hub. To model this with a clique-separator factorization law, we can set the separator potential to $$\psi_S= \infty$$ if $$S$$ does not contain a hub. We are free to set the remaining values of $$\psi_S$$ and the values of the clique potentials $$\phi_C$$ for all cliques $$C$$ as we wish. In this example, these parameters are chosen to control the sizes of cliques and separators; specifically, $$\phi_C=\exp(-4|C|)$$, and $$\psi_S=\exp(-0.5|S|)$$ when $$S$$ contains a hub, which discourages both large cliques and separators containing only hubs. The graph probability $$\pi(\mathcal{G})$$ will be zero for all decomposable graphs that fail to allow universal communication, and otherwise will follow the distribution implied by the potentials. This example requires the slight generalization of Theorem 1 mentioned in Remark 1. Figure 3 shows a sample from this model, generated using a junction tree sampler.

Fig. 3. Simulated graph from a clique-separator factorization model, with $$\phi_C=\exp(-4|C|)$$; $$\psi_S=\exp(-0.5|S|)$$ if $$S$$ contains a hub (dark colour), and $$\psi_S=\infty$$ otherwise. There are 200 vertices, including 20 hubs.

3.4. Significance for statistical analysis

This is not the place for a comprehensive investigation of the practical implications of adopting prior models from the clique-separator factorization family in statistical analysis, something we intend to explore in later work. Instead, we extend the discussion of the example of the previous subsection to draw some lessons about inference. First we make the simple but important observation that the support of the posterior distribution of the graph cannot be greater than that of the prior. So, in the example of the hub model, the posterior will be concentrated on decomposable graphs where every separator contains a hub, and realizations will have some of the character of Fig. 3. There has been considerable interest recently in learning graphical models using methods that implicitly or explicitly favour hubs, defined in various ways with some affinity to our use of the term; see, for example, Mohan et al. (2014), Tan et al. (2014) and Zhang et al. (2017). These are often motivated by genetic applications in which hubs may be believed to correspond to genes of special significance in gene regulation. These methods usually assume that the labelling of nodes as hubs is unknown, but it is straightforward to extend our hub model to put a probability model on this labelling, and to augment the Monte Carlo posterior sampler with a move that reallocates the hub labels, using any process that maintains the presence of at least one hub in every separator. This is a strong hint of the possibility of a fully Bayesian procedure that learns graphical models with hubs.

3.5. The cost of assuming the graph is decomposable when it is not

The assumption of a decomposable graph law as prior in Bayesian structural learning is of course a strong restriction. There is no reason why nature should have been kind enough to generate data from graphical models that are decomposable.
However, the computational advantages of such an assumption are tremendous; see the experiments and thorough review in Jones et al. (2005). The position has not changed much since that paper was written, so far as computation of exact posteriors is concerned. However, an optimistic perspective on this conflict between prior reasonableness and computational tractability can be justified by the work of Fitch et al. (2014). For the zero-mean Gaussian case, with a hyper-inverse-Wishart prior on the concentration matrix, they conclude that asymptotically the posterior will converge to graphical structures that are minimal triangulations of the true graph, that the marginal log-likelihood ratio comparing different minimal triangulations is stochastically bounded and appears to remain data-dependent regardless of the sample size, and that the covariance matrices corresponding to the different minimal triangulations are essentially equivalent, so that model averaging is of minimal benefit. Informally, restriction to decomposable graphs does not really matter, with the right parameter priors: we can still fit essentially the right model, though perhaps inference on the graph itself should not be over-interpreted.

4. An even weaker structural Markov property

It is tempting to wonder whether clique-separator factorization is equivalent to a simpler definition of weak structural Markovianity, one that places yet fewer conditional independence constraints on $$\mathfrak{G}$$; the existence of the theorem makes this implausible, but it remains conceivable that a smaller collection of conditional independences could be equivalent.
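The counts and dimensions quoted in §2.5, which are reused in the counterexample below (61 decomposable graphs on four vertices; law spaces of dimension 21 and 11), can be checked by brute force. A minimal sketch, assuming labelled graphs on vertices $$0,\ldots,n-1$$; the chordality test is the standard maximum cardinality search check in the spirit of Tarjan & Yannakakis (1984):

```python
from itertools import combinations

def is_decomposable(adj):
    """Chordality test: maximum cardinality search with the standard
    zero fill-in check on the earlier neighbours of each visited vertex."""
    weight = {v: 0 for v in adj}
    pos = {}
    for i in range(len(adj)):
        v = max((u for u in adj if u not in pos), key=weight.get)
        earlier = {u for u in adj[v] if u in pos}
        if earlier:
            u = max(earlier, key=pos.get)      # most recently visited
            if not earlier - {u} <= adj[u]:
                return False
        pos[v] = i
        for w in adj[v]:
            if w not in pos:
                weight[w] += 1
    return True

def counts(n):
    """Number of decomposable graphs on n labelled vertices, with the
    dimensions of the clique-separator factorization laws and the clique
    exponential family laws as given in Section 2.5."""
    pairs = list(combinations(range(n), 2))
    total = 0
    for mask in range(1 << len(pairs)):        # enumerate all graphs
        adj = {v: set() for v in range(n)}
        for k, (a, b) in enumerate(pairs):
            if mask >> k & 1:
                adj[a].add(b)
                adj[b].add(a)
        total += is_decomposable(adj)
    return total, 2 * 2**n - 2*n - 3, 2**n - n - 1
```

Here `counts(4)` returns `(61, 21, 11)`, matching the figures used in §2.5; only the three labelled four-cycles fail the chordality test.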

Efficient prior and posterior sampling from decomposable graphical models can be performed with the junction tree sampler of Green & Thomas (2013). 2. The weak structural Markov property 2.1.
Notation and terminology We follow the terminology for graphs and graphical models of Lauritzen (1996), with a few exceptions and additions, noted here. Many of these terms are also used by Byrne & Dawid (2015). We use the term graph law for the distribution of a random graph, but do not use a different symbol, for example $$\tilde{\mathcal{G}}$$, for a random graph. For any graph $$\mathcal{G}$$ on a vertex set $$V$$ and any subset $$A\subseteq V$$, $$\mathcal{G}_A$$ is the subgraph induced on vertex set $$A$$; its edges are those of $$\mathcal{G}$$ joining vertices that are both in $$A$$. A complete subgraph is one where all pairs of vertices are joined. If $$\mathcal{G}_A$$ is complete and maximal, in the sense that $$\mathcal{G}_B$$ is not complete for any superset $$B\supset A$$, then $$A$$ is a clique. Here and throughout the paper, the symbols $$\supset$$ and $$\subset$$ refer to strict inclusion. A junction tree based on a decomposable graph $$\mathcal{G}$$ on vertex set $$V$$ is any graph whose vertices are the cliques of $$\mathcal{G}$$, joined by edges in such a way that for any $$A\subseteq V$$, those vertices of the junction tree containing $$A$$ form a connected subtree. A separator is the intersection of two adjacent cliques in any junction tree. As in Green & Thomas (2013), we adopt the convention that we allow separators to be empty, with the effect that every junction tree is connected. A covering pair is any pair $$(A,B)$$ of subsets of $$V$$ such that $$A\cup B=V$$; $$(A,B)$$ is a decomposition if $$A\cap B$$ is complete and separates $$A\setminus B$$ and $$B\setminus A$$. Figure 1 illustrates the idea of a decomposition. Fig. 1. Example of a decomposition: $$A\cap B$$ is complete and separates $$A\setminus B$$ and $$B \setminus A$$. 2.2.
Definitions We begin with the definition of the structural Markov property from Byrne & Dawid (2015). Definition 1 (Structural Markov property). A graph law $$\mathfrak{G}(\mathcal{G})$$ over the set $$\mathfrak{U}$$ of undirected decomposable graphs on $$V$$ is structurally Markov if for any covering pair $$(A,B)$$, we have  $$ \mathcal{G}_A \perp\!\!\!\perp \mathcal{G}_B \mid \{\mathcal{G} \in \mathfrak{U}(A,B)\} \quad[\mathfrak{G}], $$ where $$\mathfrak{U}(A,B)$$ is the set of decomposable graphs for which $$(A,B)$$ is a decomposition. The various conditional independence statements each restrict the graph law, so we can weaken the definition by reducing the number of such statements, for example by replacing the conditioning set by a smaller one. This motivates our definition. Definition 2 (Weak structural Markov property). A graph law $$\mathfrak{G}(\mathcal{G})$$ over the set $$\mathfrak{U}$$ of undirected decomposable graphs on $$V$$ is weakly structurally Markov if for any covering pair $$(A,B)$$, we have  $$ \mathcal{G}_A \perp\!\!\!\perp \mathcal{G}_B \mid \{\mathcal{G} \in \mathfrak{U}^\star(A,B)\} \quad[\mathfrak{G}], $$ where $$\mathfrak{U}^\star(A,B)$$ is the set of decomposable graphs for which $$(A,B)$$ is a decomposition and $$A\cap B$$ is a clique, i.e., a maximal complete subgraph, in $$\mathcal{G}_A$$. The only difference from the structural Markov property is that we condition on the event $$\mathfrak{U}^\star(A,B)$$, not $$\mathfrak{U}(A,B)$$, so we only require independence when $$A\cap B$$ is a clique in $$\mathcal{G}_A$$, that is, when it is maximal in $$\mathcal{G}_A$$; it is already complete because $$(A,B)$$ is a decomposition. 
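Definitions 1 and 2 are easy to explore by brute force on small vertex sets. The following Python sketch (our own illustration, not part of the paper) enumerates all graphs on five vertices, retains those lying in $$\mathfrak{U}^\star(A,B)$$ for $$A=\{1,2,4,5\}$$ and $$B=\{2,3,4\}$$, and counts the distinct induced subgraphs $$\mathcal{G}_A$$ and $$\mathcal{G}_B$$, reproducing the counts 16 and 4 discussed below; as a by-product, the same chordality test confirms the count of 61 decomposable graphs on four vertices quoted in §2.5. Decomposability is checked by the standard greedy simplicial-elimination test for chordality.

```python
from itertools import combinations

def neighbours(edges, v):
    return {w for e in edges for w in e if v in e} - {v}

def is_chordal(vertices, edges):
    # greedy simplicial elimination succeeds if and only if the graph is chordal
    adj = {v: neighbours(edges, v) for v in vertices}
    while adj:
        simp = next((v for v in adj
                     if all(u in adj[w] for u, w in combinations(adj[v], 2))), None)
        if simp is None:
            return False
        for w in adj[simp]:
            adj[w].discard(simp)
        del adj[simp]
    return True

def separates(adj, S, X, Y):
    # True if every path from X to Y passes through S
    seen, stack = set(X), list(X)
    while stack:
        for w in adj[stack.pop()] - seen - S:
            seen.add(w)
            stack.append(w)
    return not seen & set(Y)

V, A, B = {1, 2, 3, 4, 5}, {1, 2, 4, 5}, {2, 3, 4}
S = A & B                                # the intersection {2, 4}
pairs = list(combinations(sorted(V), 2))
GA, GB, n_star = set(), set(), 0
for mask in range(1 << len(pairs)):
    E = {p for i, p in enumerate(pairs) if mask >> i & 1}
    adj = {v: neighbours(E, v) for v in V}
    if not is_chordal(V, E):
        continue                         # not decomposable
    if (2, 4) not in E or not separates(adj, S, A - B, B - A):
        continue                         # (A, B) is not a decomposition
    if any(S <= adj[v] for v in A - S):
        continue                         # S is not maximal, hence not a clique, in G_A
    GA.add(frozenset(p for p in E if set(p) <= A))
    GB.add(frozenset(p for p in E if set(p) <= B))
    n_star += 1
print(len(GA), len(GB), n_star)          # distinct G_A, distinct G_B, size of U*(A, B)

# by-product: how many of the 64 graphs on four vertices are decomposable?
V4 = {1, 2, 3, 4}
p4 = list(combinations(sorted(V4), 2))
n_dec = sum(is_chordal(V4, {p for i, p in enumerate(p4) if m >> i & 1})
            for m in range(1 << len(p4)))
print(n_dec)
```

Every valid choice of $$\mathcal{G}_A$$ pairs with every valid choice of $$\mathcal{G}_B$$, since gluing two chordal graphs along a complete separator is again chordal; this is exactly the product structure the conditional independence asserts.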
Obviously, by symmetry, $$\mathfrak{U}^\star(A,B)$$ could be defined with $$A$$ and $$B$$ interchanged without changing the meaning, but it is not the same as conditioning on the set of decomposable graphs for which $$(A,B)$$ is a decomposition and $$A\cap B$$ is a clique in at least one of $$\mathcal{G}_A$$ and $$\mathcal{G}_B$$, since in the conditional independence statement, it is $$\mathcal{G}$$ that is random, not $$(A,B)$$. The weak structural Markov property is illustrated in Fig. 2. Fig. 2. Illustration of the weak structural Markov property on a five-vertex graph: numbering the five vertices clockwise from top left, $$A$$ and $$B$$ consist of the vertices $$\{1,2,4,5\}$$ and $$\{2,3,4\}$$ respectively. For the graph $$\mathcal{G}$$ to lie in $$\mathfrak{U}^\star(A,B)$$, the edge $$(2,4)$$ must always be present, and neither $$1$$ nor $$5$$ can be connected to both $$2$$ and $$4$$. The graph law must choose independently among the 16 possibilities for $$\mathcal{G}_A$$ on the left and the 4 possibilities for $$\mathcal{G}_B$$ on the right. Under the structural Markov property, there are 16 further possibilities for $$\mathcal{G}_A$$, obtained by allowing $$1$$ and $$5$$ to be connected to both $$2$$ and $$4$$, and the choice among all 32 on the left and the 4 on the right must be made independently.
2.3. Clique-separator exponential family We now define an important family of graph laws by an algebraic specification. This family has previously been described, though not named, by Bornn & Caron (2011). These authors do not examine any Markov properties of the family, but advocate it for flexible prior specification. Definition 3 (Clique-separator exponential family and clique-separator factorization laws). The clique-separator exponential family is the exponential family of graph laws over $$\mathfrak{F} \subseteq \mathfrak{U}$$ with $$(t^+,t^-)$$ as the natural statistic with respect to the uniform measure on $$\mathfrak{U}$$, where $$t^+_A=\max(t_A,0)$$, $$t^-_A=\min(t_A,0)$$, and  $$ t_A(\mathcal{G})=\begin{cases} 1, & {\it A\, is\, a\, clique\, in\, }\mathcal{G}, \\ -\nu_A(\mathcal{G}), & {\it A\, is\, a\, separator\, in\, }\mathcal{G}, \\ 0, & {\it otherwise}, \end{cases} $$ with $$\nu_A(\mathcal{G})$$ denoting the multiplicity of separator $$A$$ in $$\mathcal{G}$$. That is, laws in the family have densities of the form  $$ \pi(\mathcal{G}) \propto \exp\{\omega^+ t^+(\mathcal{G})+\omega^- t^-(\mathcal{G})\} $$ where $$\omega^+=\{\omega^+_A: A\subseteq V\}$$ and $$\omega^-=\{\omega^-_A: A\subseteq V\}$$ are real-valued set-indexed parameters, $$t^+(\mathcal{G})=\{t^+_A(\mathcal{G}): A\subseteq V\}$$ and $$t^-(\mathcal{G})=\{t_A^-(\mathcal{G}): A\subseteq V\}$$. Here all vectors indexed by subsets of $$V$$ are listed in a fixed but arbitrary order, and the product of two such vectors is the scalar product.
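As a small worked instance of Definition 3 (our own illustration): take $$V=\{1,2,3,4\}$$ and let $$\mathcal{G}$$ be the graph with cliques $$\{1,2,3\}$$ and $$\{3,4\}$$, whose single separator $$\{3\}$$ has multiplicity one. Writing $$\phi_A=\exp(\omega^+_A)$$ and $$\psi_A=\exp(\omega^-_A)$$, the nonzero natural statistics and the resulting density are  $$ t^+_{\{1,2,3\}}(\mathcal{G})=t^+_{\{3,4\}}(\mathcal{G})=1, \qquad t^-_{\{3\}}(\mathcal{G})=-1, $$ so that  $$ \pi(\mathcal{G}) \propto \exp\{\omega^+_{\{1,2,3\}}+\omega^+_{\{3,4\}}-\omega^-_{\{3\}}\} = \frac{\phi_{\{1,2,3\}}\,\phi_{\{3,4\}}}{\psi_{\{3\}}}\text{.} $$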
Note that $$t^+_A(\mathcal{G})$$ is simply $$1$$ if $$A$$ is a clique in $$\mathcal{G}$$ and is $$0$$ otherwise, while $$t^-_A(\mathcal{G})$$ is $$-\nu_A(\mathcal{G})$$ if $$A$$ is a separator in $$\mathcal{G}$$ and is again otherwise $$0$$. This density $$\pi$$ can be equivalently written as a clique-separator factorization law   \begin{equation} \pi_{\phi,\psi}(\mathcal{G}) \propto \frac{\prod_{C \in \mathcal{C}} \phi_C}{\prod_{S \in \mathcal{S}}\psi_S}, \end{equation} (1), where $$\mathcal{C}$$ is the set of cliques and $$\mathcal{S}$$ the multiset of separators of $$\mathcal{G}$$, and $$\phi_C=\exp(\omega^+_C)$$ and $$\psi_S=\exp(\omega^-_S)$$; this is the form we prefer to use hereafter. This definition is an immediate generalization of that of the clique exponential family of Byrne & Dawid (2015), in which $$t=t^++t^-$$ is the natural statistic, so $$\omega^+_A$$ and $$\omega^-_A$$ coincide, as do $$\phi_A$$ and $$\psi_A$$. Byrne & Dawid (2015) show that for any fixed vertex set, the structurally Markov laws are precisely those in a clique exponential family. In the next section we show an analogous alignment between the weak structural Markov property and clique-separator factorization laws. 2.4. Main result Theorem 1. A graph law $$\mathfrak{G}$$ over the set $$\mathfrak{U}$$ of undirected decomposable graphs on $$V$$, whose support is all of $$\mathfrak{U}$$, is weakly structurally Markov if and only if it is a clique-separator factorization law. Remark 1. Exactly as in Byrne & Dawid (2015, Theorem 3.15) it is possible to weaken the condition of full support, that is, positivity of the density $$\pi$$. It is enough that if $$\mathcal{G}$$ is in the support, so is $$\mathcal{G}^{(C)}$$ for any clique $$C$$ of $$\mathcal{G}$$. Our proof makes use of a compact notation for decomposable graphs, and a kind of ordering of cliques that is more stringent than perfect ordering/enumeration. A decomposable graph is determined by its cliques.
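In code, evaluating an unnormalized clique-separator factorization density is immediate once the cliques and the separator multiset of $$\mathcal{G}$$ are in hand. A minimal sketch, with our own naming (`phi` and `psi` map frozensets of vertices to positive reals, and the numerical potentials below are placeholders, not values from the paper):

```python
from math import prod

def csf_density(cliques, separators, phi, psi):
    # unnormalized pi(G) = prod_C phi_C / prod_S psi_S, as in (1);
    # `separators` is a list, so separator multiplicities are respected
    num = prod(phi[frozenset(C)] for C in cliques)
    den = prod(psi[frozenset(S)] for S in separators)
    return num / den

# toy graph with cliques {1,2,3} and {3,4} and separator multiset [{3}]
phi = {frozenset({1, 2, 3}): 0.2, frozenset({3, 4}): 0.5}
psi = {frozenset({3}): 0.4}
pi_G = csf_density([{1, 2, 3}, {3, 4}], [{3}], phi, psi)  # 0.2 * 0.5 / 0.4
```

Passing the separators as a list rather than a set matters because the multiplicity $$\nu_S(\mathcal{G})$$ of a separator enters the product once per occurrence.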
We write $$\mathcal{G}^{(C_1,C_2,\ldots)}$$ for the decomposable graph with cliques $$C_1,C_2,\ldots$$. Without ambiguity we can omit singleton cliques from the list. In case the vertex set $$V$$ of the graph is not clear from the context, we emphasize it thus: $$\mathcal{G}^{(C_1,C_2,\ldots)}_V=\mathcal{G}^{(C_1,C_2,\ldots)}$$. In particular, $$\mathcal{G}^{(A)}$$ is the graph on $$V$$ that is complete in $$A$$ and empty otherwise, and $$\mathcal{G}^{(A,B)}$$ is the graph on $$V$$ that is complete on both $$A$$ and $$B$$ and empty otherwise. Recall that, starting from a list of the cliques, we can place these in a perfect sequence and simultaneously construct a junction tree by maintaining two complementary subsets: those cliques visited and those unvisited. We initialize the process by placing an arbitrary clique in the visited set and all others in the unvisited. At each successive stage, we move one unvisited clique into the visited set, choosing arbitrarily from those that are available, i.e., are adjacent to a visited clique in the junction tree; at the same time a new link is added to the junction tree. Definition 4. If at each step $$j$$ we select an available clique, labelling it $$C_j$$, such that the separator $$S_j = C_j \cap \bigcup_{i<j} C_i$$ is not a proper subset of any other separator that would arise by choosing a different available clique, then we call the ordering pluperfect. Clearly, it is computationally convenient and sufficient, but not necessary, to choose the available clique that creates one of the largest of the separators, a construction closely related to the maximum cardinality search of Tarjan & Yannakakis (1984). This shows that a pluperfect ordering always exists and that any clique can be chosen as the first. Lemma 1. Let $$\pi$$ be the density of a weakly structurally Markov graph law on $$V$$, and let $$\mathcal{G}$$ be a decomposable graph on $$V$$.
Consider a particular pluperfect ordering $$C_1, \ldots, C_J$$ of the cliques of $$\mathcal{G}$$ and a junction tree in which the links connect $$C_j$$ and $$C_{h(j)}$$ via separator $$S_j$$ for each $$j=2,\ldots,J$$, where $$h(j)\leq j-1$$. For each such $$j$$, let $$R_j$$ be any subset of $$C_{h(j)}$$ that is a proper superset of $$S_j$$. Then for any choice of such $$\{R_j\}$$, we have  $$ \pi(\mathcal{G}) =\prod_j \pi(\mathcal{G}^{(C_j)}) \times \prod_{j\geq 2} \frac{\pi(\mathcal{G}^{(R_j,C_j)})}{\pi(\mathcal{G}^{(R_j)})\pi(\mathcal{G}^{(C_j)})}\text{.} $$ Proof. Let $$B=\bigcup_{i=1}^{\,j-1} C_i$$ and set $$A=(V\setminus B) \cup R_j$$. Then $$R_j\cap C_j=S_j$$, $$(A,B)$$ is a decomposition, and $$A\cap B=R_j$$. We claim that this intersection $$R_j$$ is a clique in $$\mathcal{G}_A$$. Suppose, for a contradiction, that $$R_j$$ is not a clique in $$\mathcal{G}_A$$, i.e., that it is not maximal. Then there exists a vertex $$v$$ in $$A \setminus R_j$$ such that $$R'=R_j \cup \{v\}$$ is complete, so $$R'$$ is a subset of a clique in the original graph $$\mathcal{G}$$. Either all the cliques containing $$R'$$ are among $$\{C_i: i<j\}$$, so that $$v$$ is not in $$A$$, which is a contradiction; or one of them, say $$C_\star$$, is among $$\{C_i: i\geq j\}$$, in which case there is a path in the junction tree between $$C_{h(j)}$$ and $$C_\star$$ with every clique along the path containing $$R_j$$, so there must be a separator that is a superset of $$R_j$$, and hence a strict superset of $$S_j$$, that connects to $$C_{h(j)}$$ and is among $$\{S_{j+1}, \ldots, S_J\}$$. This contradicts the assumption that the ordering is pluperfect. This choice of $$(A,B)$$ forms a covering pair and $$\mathcal{G} \in \mathfrak{U}^\star(A,B)$$, so by the weak structural Markov property, $$\mathcal{G}_A$$ and $$\mathcal{G}_B$$ are independent under $$\pi_{A,B}$$, the joint distribution of $$(\mathcal{G}_A,\mathcal{G}_B)$$ given that $$\mathcal{G} \in \mathfrak{U}^\star(A,B)$$.
Thus we have the crossover identity   $$ \pi_{A,B}(\mathcal{G}_A^{(R_j)},\mathcal{G}_B^{(R_j)}) \times \pi_{A,B}(\mathcal{G}_A^{(R_j,C_j)},\mathcal{G}_B^{(C_1,\ldots,C_{j-1})}) = \pi_{A,B}(\mathcal{G}_A^{(R_j)},\mathcal{G}_B^{(C_1,\ldots,C_{j-1})}) \times \pi_{A,B}(\mathcal{G}_A^{(R_j,C_j)},\mathcal{G}_B^{(R_j)}) $$ or, equivalently,   $$ \pi(\mathcal{G}^{(R_j)})\pi(\mathcal{G}^{(C_1,\ldots,C_j)})=\pi(\mathcal{G}^{(C_1,\ldots,C_{j-1})})\pi(\mathcal{G}^{(R_j,C_{j})})\text{.} $$ We can therefore write   \begin{align*} \pi(\mathcal{G}) &= \pi(\mathcal{G}^{(C_1)}) \prod_{j\geq 2} \frac{\pi(\mathcal{G}^{(C_1,\ldots,C_{j})})}{\pi(\mathcal{G}^{(C_1,\ldots,C_{j-1})})}= \pi(\mathcal{G}^{(C_1)}) \prod_{j\geq 2} \frac{\pi(\mathcal{G}^{(R_j,C_{j})})}{\pi(\mathcal{G}^{(R_j)})} \\ & =\prod_j \pi(\mathcal{G}^{(C_j)}) \times \prod_{j\geq 2} \frac{\pi(\mathcal{G}^{(R_j,C_{j})})}{\pi(\mathcal{G}^{(R_j)})\pi(\mathcal{G}^{(C_j)})}. \end{align*} □ Lemma 2. Let $$\pi$$ be the density of a weakly structurally Markov graph law on $$V$$, and let $$S$$ be any subset of the vertices $$V$$ with $$|S|\leq |V|-2$$. Then $$\pi(\mathcal{G}^{(R_1,R_2)})/\{\pi(\mathcal{G}^{(R_1)})\pi(\mathcal{G}^{(R_2)})\}$$ depends only on $$S$$, for all pairs of subsets $$R_1$$ and $$R_2$$ such that $$R_1\cup R_2\subseteq V$$, $$R_1\cap R_2=S$$, and both $$R_1$$ and $$R_2$$ are strict supersets of $$S$$. Proof. Consider the decomposable graph $$\mathcal{G}^{(R_1,R_2)}$$ whose unique junction tree has cliques $$R_1$$ and $$R_2$$ and separator $$S$$. Applying Lemma 1 to this graph, we have   $$ \pi(\mathcal{G}^{(R_1,R_2)})=\pi(\mathcal{G}^{(R_1)}) \pi(\mathcal{G}^{(R_2)}) \times \frac{\pi(\mathcal{G}^{(R,R_2)})}{\pi(\mathcal{G}^{(R)})\pi(\mathcal{G}^{(R_2)})}, $$ that is,   $$ \frac{\pi(\mathcal{G}^{(R_1,R_2)})}{\pi(\mathcal{G}^{(R_1)}) \pi(\mathcal{G}^{(R_2)})}=\frac{\pi(\mathcal{G}^{(R,R_2)})}{\pi(\mathcal{G}^{(R)})\pi(\mathcal{G}^{(R_2)})}, $$ for any $$R$$ with $$S\subset R\subseteq R_1$$.
This means that any vertices may be added to or removed from $$R_1$$, or by symmetry added to or removed from $$R_2$$, without changing the value of $$\pi(\mathcal{G}^{(R_1,R_2)})/\{\pi(\mathcal{G}^{(R_1)}) \pi(\mathcal{G}^{(R_2)})\}$$, provided it remains true that $$R_1\cup R_2\subseteq V$$, $$R_1\cap R_2=S$$, $$R_1\supset S$$ and $$R_2\supset S$$. But any unordered pair of subsets $$R_1$$ and $$R_2$$ of $$V$$ with $$R_1\cup R_2\subseteq V$$, $$R_1\cap R_2=S$$, $$R_1\supset S$$ and $$R_2\supset S$$ can be transformed stepwise into any other such pair by successively adding or removing vertices to or from one or other of the subsets. Thus $$\pi(\mathcal{G}^{(R_1,R_2)})/\{\pi(\mathcal{G}^{(R_1)})\pi(\mathcal{G}^{(R_2)})\}$$ can depend only on $$S$$; we will denote it by $$1/\psi_S$$. □ Proof of Theorem 1. Suppose that $$\pi$$ is the density of a weakly structurally Markov graph law on $$V$$. For each $$A\subseteq V$$, let $$\phi_A=\pi(\mathcal{G}^{(A)})$$. Then by Lemmas 1 and 2,   $$ \pi(\mathcal{G}) =\frac{\prod_j \phi_{C_j}}{\prod_{j \geq 2} \psi_{S_j}}\text{.} $$ Since $$\mathcal{G}^{(\emptyset)}$$, $$\mathcal{G}^{(\{v\})}$$, $$\mathcal{G}^{(\{w\})}$$ and $$\mathcal{G}^{(\{v\},\{w\})}$$ for distinct vertices $$v,w\in V$$ all denote the same graph, we must have $$\phi_{\{v\}}=\pi(\mathcal{G}^{(\emptyset)})$$ for all $$v$$, and also $$\psi_{\emptyset}=\pi(\mathcal{G}^{(\emptyset)})$$. Under these conditions, the constant of proportionality in (1) is evidently 1. Conversely, it is trivial to show that if the clique-separator factorization property (1) applies to $$\pi$$, then $$\pi$$ is the density of a weakly structurally Markov graph law. □ 2.5.
Identifiability of parameters Byrne & Dawid (2015, Proposition 3.14) point out that their $$\{t_A(\mathcal{G})\}$$ values are subject to $$|V|+1$$ linear constraints, $$\sum_{A\subseteq V} t_A(\mathcal{G})=1$$ and $$\sum_{A\ni v} t_A(\mathcal{G})=1$$ for all $$v\in V$$, so that their parameters $$\omega_A$$, or equivalently $$\phi_A$$, are not all identifiable. They obtain identifiability by proposing a standardized vector $$\omega^\star$$, with $$|V|+1$$ necessarily zero entries, that is a linear transform of $$\omega$$. By the same token, the $$|V|+1$$ constraints on $$t_A(\mathcal{G})$$ are linear constraints on $$t^+_A(\mathcal{G})$$ and $$t^-_A(\mathcal{G})$$, and so $$\{\phi_A\}$$ and $$\{\psi_A\}$$ are not all identifiable. We could obtain identifiable parameters by, for example, choosing $$\psi_{\emptyset}=1$$ and $$\phi_{\{v\}}=1$$ for all $$v \in V$$ or, as above, by setting $$\psi_{\emptyset}=\pi(\mathcal{G}^{(\emptyset)})$$ and $$\phi_{\{v\}}=\pi(\mathcal{G}^{(\emptyset)})$$ for all $$v$$; other ways are also possible. Note in addition that $$\emptyset$$ cannot be a clique, and neither $$A=V$$ nor any subset $$A$$ of $$V$$ with $$|A|=|V|-1$$ can be a separator, so the corresponding $$\phi_A$$ and $$\psi_A$$ are never used. The dimension of the space of clique-separator factorization laws is therefore $$2 \times 2^{|V|}-2|V|-3$$, nearly twice that of clique exponential family laws, $$2^{|V|}-|V|-1$$. For example, when $$|V|=3$$, all graphs are decomposable, and all graph laws are clique-separator factorization laws, while clique exponential family laws have dimension 4; when $$|V|=4$$, 61 out of 64 graphs are decomposable, and the dimensions of the two spaces of laws are 21 and 11; when $$|V|=7$$, only 617 675 out of the $$2^{21}$$ graphs are decomposable, and the dimensions are 239 and 120. 3. Some implications for modelling and inference 3.1. 
Conjugacy and posterior updating As priors for the graph underlying a model $$P(X \mid \mathcal{G})$$ for data $$X$$, clique-separator factorization laws are conjugate for decomposable likelihoods, in the case where there are no unknown parameters in the distribution: given $$X$$ from the model   $$ p(X\mid\mathcal{G}) = \frac{\prod_{C\in \mathcal{C}} \lambda_C(X_C)}{\prod_{S\in \mathcal{S}} \lambda_S(X_S)} =\prod_{A\subseteq V} \lambda_A(X_A)^{t_A(\mathcal{G})}, $$ where $$\lambda_A(X_A)$$ denotes the marginal distribution of $$X_A$$, the posterior for $$\mathcal{G}$$ is   $$ p(\mathcal{G}\mid X) \propto \frac{\prod_{C\in \mathcal{C}} \phi_C\lambda_C(X_C)}{\prod_{S\in \mathcal{S}} \psi_S\lambda_S(X_S)}, $$ that is, a clique-separator factorization law with parameters $$\{\phi_A\lambda_A(X_A)\}$$ and $$\{\psi_A\lambda_A(X_A)\}$$. More generally, when there are parameters in the graph-specific likelihoods, the notions of compatibility and hypercompatibility (Byrne & Dawid, 2015) allow the extension of the idea of structural Markovianity to the joint Markovianity of the graph and the parameters, and give the form of the corresponding posterior. 3.2. Computational implications Computing posterior distributions of graphs on a large scale remains problematic: Markov chain Monte Carlo methods are seemingly the only option except for toy problems, and these methods have notoriously poor mixing. However, the junction tree sampler of Green & Thomas (2013) seems to give acceptable performance for problems of up to a few hundred variables. Posteriors induced by clique-separator factorization law priors are ideal material for these samplers, which explicitly use a clique-separator representation of all graphs and distributions. In Bornn & Caron (2011), a different Markov chain Monte Carlo sampler for clique-separator factorization laws is introduced.
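The parameter-free conjugate update of §3.1 amounts to elementwise multiplication of the prior potentials by the marginal likelihoods. A minimal sketch (our own naming; the numerical values of `lam` are placeholders, not computed from any data):

```python
def csf_posterior(phi, psi, lam):
    # posterior potentials phi_A * lam_A(X_A) and psi_A * lam_A(X_A),
    # giving another clique-separator factorization law
    return ({A: phi[A] * lam[A] for A in phi},
            {A: psi[A] * lam[A] for A in psi})

phi = {frozenset({1, 2}): 1.0, frozenset({2, 3}): 1.0}
psi = {frozenset({2}): 1.0}
lam = {frozenset({1, 2}): 0.3, frozenset({2, 3}): 0.2, frozenset({2}): 0.5}
post_phi, post_psi = csf_posterior(phi, psi, lam)
```

Because the update preserves the factorized form, the posterior can be fed straight back into any sampler that exploits the clique-separator representation.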
We have evidence that the examples shown in their figures are not representative samples from the particular models claimed, due to poor mixing. 3.3. Modelling Here we briefly discuss the way in which choices of particular forms for the parameters $$\phi_A$$ and $$\psi_A$$ govern the qualitative and even quantitative aspects of the graph law. These choices are important in designing a graph law for a particular purpose, whether or not this is prior modelling in Bayesian structural learning. A limitation of clique exponential family models is that because large clique potentials count in favour of a graph, and large separator potentials count against, it is difficult for these laws to encourage the same features in both cliques and separators. For instance, if we choose clique potentials to favour large cliques, we seem to be forced to favour small separators. A popular choice for a graph prior in past work on Bayesian structural learning is the well-known Erdős–Rényi random graph model, in which each of the $$|V|(|V|-1)/2$$ possible edges on the vertex set $$V$$ is present independently, with probability $$p$$. This model is amenable to theoretical study, but its realizations typically exhibit no discernible structure. When restricted to decomposable graphs, the Erdős–Rényi model is a rather extreme example of a clique exponential family law, which arises by taking $$\phi_A=\{p/(1-p)\}^{|A|(|A|-1)/2}$$. Again realizations appear unstructured, essentially because of the quadratic dependence on clique or separator size in the exponent of the potentials $$\phi_A$$. For a concrete example of a model with much more structure, suppose that our decomposable graph represents a communication network. There are two types of vertices, hubs and non-hubs. Adjacent vertices can all communicate with each other, but only hubs will relay messages.
So, for a non-hub to communicate with a nonadjacent non-hub, there must be a path in the graph from one to the other where all intermediate nodes are hubs. This example has the interesting feature that using only local properties, it enforces a global property, namely universal communication. A necessary and sufficient condition for universal communication is that every separator contains a hub. This implies that either the graph is a single clique, or every clique must also contain a hub. To model this with a clique-separator factorization law, we can set the separator potential to $$\psi_S= \infty$$ if $$S$$ does not contain a hub. We are free to set the remaining values of $$\psi_S$$ and the values of the clique potentials $$\phi_C$$ for all cliques $$C$$ as we wish. In this example, these parameters are chosen to control the sizes of cliques and separators; specifically, $$\phi_C=\exp(-4|C|)$$ and $$\psi_S=\exp(-0.5|S|)$$ when $$S$$ contains a hub, which discourages both large cliques and separators containing only hubs. The graph probability $$\pi(\mathcal{G})$$ will be zero for all decomposable graphs that fail to allow universal communication, and otherwise will follow the distribution implied by the potentials. This example requires the slight generalization of Theorem 1 mentioned in Remark 1 following it. Figure 3 shows a sample from this model, generated using a junction tree sampler. Fig. 3. Simulated graph from a clique-separator factorization model, with $$\phi_C=\exp(-4|C|)$$; $$\psi_S=\exp(-0.5|S|)$$ if $$S$$ contains a hub (dark colour), and is $$\infty$$ otherwise. There are 200 vertices including 20 hubs. 3.4.
3.4. Significance for statistical analysis

This is not the place for a comprehensive investigation of the practical implications of adopting prior models from the clique-separator factorization family in statistical analysis; that is something we intend to explore in later work. Instead, we extend the discussion of the example of the previous subsection to draw some lessons about inference. First we make the simple but important observation that the support of the posterior distribution of the graph cannot be greater than that of the prior. So, in the example of the hub model, the posterior will be concentrated on decomposable graphs in which every separator contains a hub, and realizations will have some of the character of Fig. 3.

There has been considerable interest recently in learning graphical models using methods that implicitly or explicitly favour hubs, defined in various ways with some affinity to our use of the term; see, for example, Mohan et al. (2014), Tan et al. (2014) and Zhang et al. (2017). These methods are often motivated by genetic applications in which hubs may be believed to correspond to genes of special significance in gene regulation. They usually assume that the labelling of nodes as hubs is unknown, but it is straightforward to extend our hub model to put a probability model on this labelling, and to augment the Monte Carlo posterior sampler with a move that reallocates the hub labels, using any process that maintains the presence of at least one hub in every separator. This is a strong hint of the possibility of a fully Bayesian procedure that learns graphical models with hubs.

3.5. The cost of assuming the graph is decomposable when it is not

The assumption of a decomposable graph law as a prior in Bayesian structural learning is of course a strong restriction. There is no reason why nature should have been kind enough to generate data from graphical models that are decomposable.
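One simple process of the kind just described, which reallocates hub labels while maintaining a hub in every separator, can be sketched as below. This is a minimal illustration in our own notation, not code from the paper: the function name and data representation are assumptions, and the usual Metropolis–Hastings accept/reject step is omitted.

```python
import random

def propose_hub_swap(hubs, vertices, separators, rng=random):
    """Propose exchanging the hub status of one hub and one non-hub,
    keeping only proposals that leave at least one hub in every separator."""
    non_hubs = sorted(vertices - hubs)
    h = rng.choice(sorted(hubs))
    v = rng.choice(non_hubs)
    proposal = (hubs - {h}) | {v}
    if all(S & proposal for S in separators):
        return proposal   # candidate labelling for the usual accept step
    return hubs           # invalid: some separator would be left hubless
```

Any proposal that empties a separator of hubs is rejected outright, so the chain never leaves the support of the hub model.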
However, the computational advantages of such an assumption are tremendous; see the experiments and thorough review in Jones et al. (2005). The position has not changed much since that paper was written, so far as computation of exact posteriors is concerned. Nevertheless, an optimistic perspective on this conflict between prior reasonableness and computational tractability can be justified by the work of Fitch et al. (2014). For the zero-mean Gaussian case, with a hyper inverse Wishart prior on the concentration matrix, they conclude that, asymptotically: the posterior will converge to graphical structures that are minimal triangulations of the true graph; the marginal log-likelihood ratio comparing different minimal triangulations is stochastically bounded and appears to remain data-dependent regardless of the sample size; and the covariance matrices corresponding to the different minimal triangulations are essentially equivalent, so model averaging is of minimal benefit. Informally, restriction to decomposable graphs does not really matter, given the right parameter priors: we can still fit essentially the right model, though perhaps inference on the graph itself should not be over-interpreted.

4. An even weaker structural Markov property

It is tempting to wonder whether clique-separator factorization is equivalent to a simpler definition of weak structural Markovianity, one that places yet fewer conditional independence constraints on $$\mathfrak{G}$$; the existence of the theorem makes this implausible, but it remains conceivable that a smaller collection of conditional independences could be equivalent.
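The counterexample that follows relies on counts such as the 61 decomposable graphs on four vertices. These are easy to verify by brute force, since a graph is decomposable exactly when it is chordal, and chordality can be tested by repeatedly deleting simplicial vertices (vertices whose remaining neighbours form a clique). The sketch below is our own code, not from the paper.

```python
from itertools import combinations

def is_decomposable(n, edges):
    """Chordality test by simplicial vertex elimination."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(range(n))
    while remaining:
        simplicial = next(
            (v for v in remaining
             if all(b in adj[a]
                    for a, b in combinations(adj[v] & remaining, 2))),
            None)
        if simplicial is None:
            return False   # no simplicial vertex: a chordless cycle remains
        remaining.remove(simplicial)
    return True

# Enumerate all graphs on four vertices and count the decomposable ones.
possible_edges = list(combinations(range(4), 2))
count = sum(
    is_decomposable(4, subset)
    for k in range(len(possible_edges) + 1)
    for subset in combinations(possible_edges, k))
# Of the 64 graphs on four vertices, only the three chordless
# four-cycles fail the test, leaving 61 decomposable graphs.
```

More efficient linear-time chordality tests exist, notably that of Tarjan & Yannakakis (1984), but for sanity checks on a handful of vertices the elimination scheme above suffices.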
The following counterexample rules out the possibility of requiring only that
\begin{equation} \mathcal{G}_A \perp\!\!\!\perp \mathcal{G}_B \mid \{\mathcal{G} \in \mathfrak{U}^{+}(A,B)\} \quad[\mathfrak{G}], \end{equation} (2)
where $$\mathfrak{U}^{+}(A,B)$$ is the set of decomposable graphs for which $$(A,B)$$ is a decomposition and $$A\cap B$$ is a clique in $$\mathcal{G}$$.

Example 1. Consider graphs on vertices $$\{1,2,3,4\}$$. The only nontrivial conditional independence statements implied by property (2) arise from decompositions $$(A,B)$$ in which both $$A$$ and $$B$$ have three vertices and $$A\cap B$$ has two. Suppose that $$A=\{1,2,3\}$$, written $$123$$ for short, and that $$B=234$$. Given that $$23$$ is a clique in $$\mathcal{G}$$, $$\mathcal{G}_A$$ may be $$\mathcal{G}^{(23)}_A$$, $$\mathcal{G}^{(12,23)}_A$$ or $$\mathcal{G}^{(13,23)}_A$$, and similarly $$\mathcal{G}_B$$ may be $$\mathcal{G}^{(23)}_B$$, $$\mathcal{G}^{(23,24)}_B$$ or $$\mathcal{G}^{(23,34)}_B$$. These two choices are independent, by (2), and this imposes four equality constraints on the graph law. There are six different choices for the two-vertex clique $$A\cap B$$, so not more than 24 constraints overall; they may not all be independent. There are 61 decomposable graphs on four vertices, so the set of graph laws satisfying (2) has dimension at least $$60-24=36$$. But, as we saw in §2.5, the set of clique-separator factorization laws has dimension $$2 \times 2^{|V|}-2|V|-3 = 21$$. Essentially, assumption (2) does not constrain the graph law enough to yield the explicit clique-separator factorization. In fact, it is easy to show that (2) places no constraints on $$\pi(\mathcal{G})$$ for any connected $$\mathcal{G}$$ consisting of one or two cliques.

Acknowledgement

We thank the editors and referees for their constructive criticisms of our manuscript, and we are also grateful to Luke Bornn, Simon Byrne, François Caron, Phil Dawid and Steffen Lauritzen for stimulating discussions and correspondence.
Alun Thomas was supported in part by the National Center for Research Resources and the National Center for Advancing Translational Sciences of the U.S. National Institutes of Health. Peter Green is also a Professorial Fellow at the University of Bristol.

References

Bornn, L. & Caron, F. (2011). Bayesian clustering in decomposable graphs. Bayesian Anal. 6, 829–46.
Byrne, S. & Dawid, A. P. (2015). Structural Markov graph laws for Bayesian model uncertainty. Ann. Statist. 43, 1647–81.
Fitch, A. M., Jones, M. B. & Massam, H. (2014). The performance of covariance selection methods that consider decomposable models only. Bayesian Anal. 9, 659–84.
Green, P. J. & Thomas, A. (2013). Sampling decomposable graphs using a Markov chain on junction trees. Biometrika 100, 91–110.
Jones, B., Carvalho, C., Dobra, A., Hans, C., Carter, C. & West, M. (2005). Experiments in stochastic computation for high-dimensional graphical models. Statist. Sci. 20, 388–400.
Lauritzen, S. L. (1996). Graphical Models. Oxford: Clarendon Press.
Mohan, K., London, P., Fazel, M., Witten, D. & Lee, S.-I. (2014). Node-based learning of multiple Gaussian graphical models. J. Mach. Learn. Res. 15, 445–88.
Tan, K. M., London, P., Mohan, K., Lee, S.-I., Fazel, M. & Witten, D. M. (2014). Learning graphical models with hubs. J. Mach. Learn. Res. 15, 3297–331.
Tarjan, R. E. & Yannakakis, M. (1984). Simple linear-time algorithms to test chordality of graphs, test acyclicity of hypergraphs, and selectively reduce acyclic hypergraphs. SIAM J. Comp. 13, 566–79.
Zhang, X.-F., Ou-Yang, L. & Yan, H. (2017). Incorporating prior information into differential network analysis using non-paranormal graphical models. Bioinformatics 33, 2436–45.
© 2017 Biometrika Trust. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.

Biometrika, Oxford University Press. Published: Mar 1, 2018.