Relational Dependency Networks (RDNs) are graphical models that extend
dependency networks to relational domains, where the joint probability
distribution over the variables is approximated as a product of
conditional distributions. This higher expressivity, however, comes at
the expense of a more complex model-selection problem: an unbounded
number of relational abstraction levels might need to be explored.
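The product-of-conditionals approximation above can be made concrete with a small sketch. The variable names and the conditional probability values below are invented for illustration; the point is only that a dependency network scores a full assignment by multiplying each variable's conditional given the others, without requiring that this product be a consistent joint distribution.

```python
# Hypothetical conditionals P(X_i = true | rest) for three binary
# variables; each is a function of the full assignment.
conditionals = {
    "smokes":        lambda a: 0.8 if a["friends_smoke"] else 0.3,
    "cancer":        lambda a: 0.6 if a["smokes"] else 0.1,
    "friends_smoke": lambda a: 0.7 if a["smokes"] else 0.4,
}

def pseudo_likelihood(assignment):
    """Product of per-variable conditionals: the dependency-network
    approximation of the joint probability of `assignment`."""
    p = 1.0
    for var, cond in conditionals.items():
        p_true = cond(assignment)
        p *= p_true if assignment[var] else 1.0 - p_true
    return p

# For the all-true assignment this is 0.8 * 0.6 * 0.7 = 0.336.
print(pseudo_likelihood(
    {"smokes": True, "cancer": True, "friends_smoke": True}))
```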
Whereas current learning approaches for RDNs learn a single probability tree per random variable, RDN-Boost casts learning as a series of relational function-approximation problems solved by gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and, in turn, quickly estimate a very expressive model.
The software provided here can also learn a single relational probability tree, though the main contribution is the functional gradient boosting of RDNs.
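The functional-gradient step can be sketched as follows. This is a minimal propositional sketch, not the actual RDN-Boost implementation: the toy examples, feature names, and the use of depth-1 regression stumps in place of relational regression trees are all simplifying assumptions. What it does show is the core boosting loop, where each round fits a regression model to the pointwise gradient I(y = 1) - P(y = 1 | x) of the log-likelihood and adds it to the current potential function.

```python
import math

# Toy training data (invented): each example is a dict of boolean
# features standing in for ground relational facts, plus a 0/1 label.
EXAMPLES = [
    ({"advisedBy": 1, "publication": 1}, 1),
    ({"advisedBy": 1, "publication": 0}, 1),
    ({"advisedBy": 0, "publication": 1}, 0),
    ({"advisedBy": 0, "publication": 0}, 0),
]

def sigmoid(psi):
    return 1.0 / (1.0 + math.exp(-psi))

def fit_stump(examples, gradients):
    """Fit a depth-1 regression 'tree' (a stump) to the pointwise
    gradients: choose the feature split with the lowest squared error
    and predict the mean gradient in each branch."""
    best = None
    for feat in examples[0][0]:
        left = [g for (x, _), g in zip(examples, gradients) if x[feat]]
        right = [g for (x, _), g in zip(examples, gradients) if not x[feat]]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((g - lmean) ** 2 for g in left)
               + sum((g - rmean) ** 2 for g in right))
        if best is None or sse < best[0]:
            best = (sse, feat, lmean, rmean)
    _, feat, lmean, rmean = best
    return lambda x, f=feat, l=lmean, r=rmean: l if x[f] else r

def boost(examples, n_rounds=10):
    """Gradient-based boosting: the potential psi(x) is the sum of all
    regression stumps learned so far; each round fits a new stump to
    the functional gradient I(y = 1) - P(y = 1 | x)."""
    trees = []
    for _ in range(n_rounds):
        psis = [sum(t(x) for t in trees) for x, _ in examples]
        grads = [y - sigmoid(p) for (_, y), p in zip(examples, psis)]
        trees.append(fit_stump(examples, grads))
    return trees

def predict(trees, x):
    return sigmoid(sum(t(x) for t in trees))

trees = boost(EXAMPLES)
```

In RDN-Boost the stumps are replaced by relational regression trees whose inner nodes test first-order literals, which is what lets the boosted sum represent highly complex relational features.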
Sriraam Natarajan, Tushar Khot, Kristian Kersting, Bernd Gutmann, and Jude Shavlik. Boosting Relational Dependency Networks. ILP 2010.
J. Neville and D. Jensen. Relational Dependency Networks. Introduction to Statistical Relational Learning, 2007.
J. H. Friedman. Greedy Function Approximation: A Gradient Boosting Machine. Annals of Statistics, 29, 2001.
We gratefully acknowledge the support of the Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA, AFRL, or the US government.