Learning Markov Logic Network Structure via Hypergraph Lifting

Stanley Kok (koks@cs.washington.edu) and Pedro Domingos (pedrod@cs.washington.edu)
Department of Computer Science & Engineering, University of Washington, Seattle, WA 98195, USA


Association for Computing Machinery — Jun 14, 2009



Datasource: Association for Computing Machinery
Copyright: © 2009 ACM Inc.
ISBN: 978-1-60558-516-1
DOI: 10.1145/1553374.1553440

Abstract

Markov logic networks (MLNs) combine logic and probability by attaching weights to first-order clauses and viewing these as templates for features of Markov networks. Learning MLN structure from a relational database involves learning the clauses and weights. The state-of-the-art MLN structure learners all involve some element of greedily generating candidate clauses, and are susceptible to local optima. To address this problem, we present an approach that directly utilizes the data in constructing candidates. A relational database can be viewed as a hypergraph with constants as nodes and relations as hyperedges. We find paths of true ground atoms in the hypergraph that are connected via their arguments. To make this tractable (there are exponentially many paths in the hypergraph), we lift the hypergraph by jointly clustering the constants to form higher-level concepts, and find paths in it. We variabilize the ground atoms in each path, and use them to form clauses, which are evaluated using a pseudo-likelihood measure. In our experiments on three real-world datasets, we find that our algorithm outperforms the state-of-the-art approaches.
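The hypergraph view in the abstract is concrete enough to sketch in code. Below is a minimal, illustrative Python sketch (not the authors' LHL implementation): it represents a relational database of true ground atoms as a hypergraph, with constants as nodes and ground atoms as hyperedges, and enumerates paths of atoms connected through shared arguments. All names (`facts`, `incident`, `find_paths`) and the toy academic domain are assumptions made for illustration; the paper's point is precisely that this search is exponential, which is why LHL first lifts the hypergraph by jointly clustering constants.

```python
# Illustrative sketch (not the authors' code): a relational database as a
# hypergraph, and depth-first enumeration of paths of true ground atoms
# that are connected via shared constants.
from collections import defaultdict

# A relational database as a set of true ground atoms: (relation, args).
facts = {
    ("Advises", ("pedro", "stan")),
    ("Teaches", ("pedro", "cse546")),
    ("TAs", ("stan", "cse546")),
}

# Hypergraph view: each constant (node) indexes the hyperedges (ground
# atoms) it participates in.
incident = defaultdict(set)
for atom in facts:
    _, args = atom
    for const in args:
        incident[const].add(atom)

def find_paths(atom, max_len, path=None):
    """Yield paths of ground atoms of length up to max_len, where
    consecutive atoms share at least one constant. This blows up
    exponentially in max_len, which is why the paper lifts the
    hypergraph before searching."""
    path = (path or []) + [atom]
    yield path
    if len(path) == max_len:
        return
    _, args = atom
    for const in args:
        for neighbor in incident[const]:
            if neighbor not in path:
                yield from find_paths(neighbor, max_len, path)

for p in find_paths(("Advises", ("pedro", "stan")), max_len=3):
    print(" -> ".join(f"{rel}{args}" for rel, args in p))
```

Variabilizing the constants in a path such as Advises(pedro, stan) -> Teaches(pedro, cse546) -> TAs(stan, cse546) would yield a candidate clause like Advises(x, y) ∧ Teaches(x, c) ∧ TAs(y, c), which is then scored with the pseudo-likelihood measure mentioned in the abstract.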
