Relational Dependency Networks (RDNs)[2] are graphical models that extend
dependency networks to relational domains where the joint probability
distribution over the variables is approximated as a product of
conditional distributions. This higher expressivity, however, comes at
the expense of a more complex model-selection problem: an unbounded
number of relational abstraction levels might need to be
explored.
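Concretely, if X_1, ..., X_n are the ground random variables and Pa(X_i) denotes the (possibly relational) parents of X_i, the dependency-network approximation can be written as (notation ours, for illustration only):

    P(X_1, \ldots, X_n) \approx \prod_{i=1}^{n} P(X_i \mid \mathrm{Pa}(X_i))

and it is these per-variable conditional distributions that the learner must represent.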
Whereas current learning approaches for RDNs learn a single
probability tree per random variable, RDN-Boost[1] casts learning as a
series of relational function-approximation problems solved by
gradient-based boosting[3]. In doing so, one can easily induce highly
complex features over several iterations and, in turn, quickly estimate
a very expressive model.
The software provided here can also learn
a single relational probability tree, though its main contribution is
the functional gradient boosting of RDNs.
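To make the boosting step concrete, below is a minimal, propositional sketch of functional gradient boosting for a single conditional distribution. The function names (boost_conditional, predict_proba) and the use of scikit-learn's DecisionTreeRegressor are our own illustration, standing in for the relational regression trees that the actual system learns at each iteration.

    # Propositional sketch of the functional-gradient-boosting idea behind
    # RDN-Boost [1,3]: repeatedly fit a small regression tree to the pointwise
    # gradient (observed label minus current predicted probability) and add it
    # to the running potential function psi, so that P(y=1|x) = sigmoid(psi(x)).
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def sigmoid(psi):
        return 1.0 / (1.0 + np.exp(-psi))

    def boost_conditional(X, y, n_iters=20, max_depth=2):
        """Learn psi(x) = sum_m tree_m(x) by functional gradient boosting."""
        trees = []
        psi = np.zeros(len(y))                 # psi_0 = 0 (uniform start)
        for _ in range(n_iters):
            gradient = y - sigmoid(psi)        # pointwise functional gradient
            tree = DecisionTreeRegressor(max_depth=max_depth)
            tree.fit(X, gradient)              # tree approximates the gradient
            psi += tree.predict(X)             # step in function space
            trees.append(tree)
        return trees

    def predict_proba(trees, X):
        psi = sum(t.predict(X) for t in trees)
        return sigmoid(psi)

    # Toy usage: a noisy threshold rule on two features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 2))
    y = ((X[:, 0] + X[:, 1] > 0) ^ (rng.random(500) < 0.1)).astype(float)
    trees = boost_conditional(X, y)
    print(predict_proba(trees, X[:5]))

In the relational setting, each regression tree branches on first-order literals over an example's neighbourhood rather than on propositional feature thresholds, which is how the highly complex features mentioned above arise over successive iterations.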
References
[1] S. Natarajan, T. Khot, K. Kersting, B. Gutmann, and J. Shavlik. Boosting Relational Dependency Networks. ILP 2010.
[2] J. Neville and D. Jensen. Relational dependency
networks. Introduction to Statistical Relational Learning, 2007.
[3] J.H. Friedman. Greedy function approximation: A gradient boosting
machine. Annals of Statistics, 29, 2001.
Acknowledgements
We gratefully acknowledge the support of the Defense Advanced
Research Projects Agency (DARPA) Machine Reading Program under Air Force
Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181. Any
opinions, findings, and conclusions or recommendations expressed in this
material are those of the author(s) and do not necessarily reflect the
views of DARPA, AFRL, or the US government.