Learning to Reject Sequential Importance Steps for Continuous-Time Bayesian Networks
Jeremy C. Weiss, Sriraam Natarajan, and David Page
When observations are incomplete, approximate inference procedures based on sequential importance sampling are often used. However, when proposal and target distributions are dissimilar, the procedures lead to biased estimates or require a prohibitive number of samples. This paper introduces a method that better approximates the target distribution by sampling variable by variable from existing importance samplers and accepting or rejecting each proposed assignment in the sequence: a choice made based on anticipating upcoming evidence. We relate the per-variable proposal and target distributions by expected weight ratios of sequence completions and show that we can learn accurate models of optimal acceptance probabilities from local samples. In a continuous-time domain, our method improves upon previous importance samplers by transforming a sequential importance sampling problem into a machine learning one.
Here is some research code for doing rejection-based importance sampling for continuous-time Bayesian networks (CTBNs). Read the readme.txt to get started.
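The core idea of the paper can be sketched on a toy discrete example: each variable in the sequence is proposed from an importance sampler and then accepted or rejected with a (learned) acceptance probability; rejection-and-resampling changes the effective proposal to q(v)·α(v)/Z, so the importance weight must be corrected by the normalizer Z. The sketch below is illustrative only, not the repository's CTBN implementation: the chain of binary variables, the `target_cond` distribution, and the hand-picked `accept_prob` function are all assumptions made up for the demo (in the paper, α is learned from local samples).

```python
import random

def step(prefix, target_cond, accept_prob):
    """Sample one variable by propose-then-accept/reject.

    target_cond(prefix) -> P(next var = 1 | prefix) under the target.
    accept_prob(prefix, v) -> acceptance probability for proposed value v
    (hand-picked here; learned from local samples in the paper).
    The base proposal is uniform over {0, 1} (q = 0.5). Resampling until
    acceptance makes the effective proposal q(v)*alpha(v)/Z, so the
    returned weight includes the correction factor Z.
    """
    q = 0.5
    z = sum(q * accept_prob(prefix, v) for v in (0, 1))  # normalizer of effective proposal
    while True:
        v = random.randint(0, 1)
        if random.random() < accept_prob(prefix, v):
            p1 = target_cond(prefix)
            p = p1 if v == 1 else 1.0 - p1
            return v, p * z / (q * accept_prob(prefix, v))

def sample(n_vars, target_cond, accept_prob):
    """Draw one full importance sample (assignment, weight)."""
    prefix, weight = [], 1.0
    for _ in range(n_vars):
        v, w = step(tuple(prefix), target_cond, accept_prob)
        prefix.append(v)
        weight *= w
    return prefix, weight

# Toy target over two binary variables (made up for illustration):
# P(x1 = 1) = 0.8;  P(x2 = 1 | x1) = 0.9 if x1 else 0.2.
def target_cond(prefix):
    if not prefix:
        return 0.8
    return 0.9 if prefix[-1] == 1 else 0.2

# A crude acceptance model that biases proposals toward v = 1.
def accept_prob(prefix, v):
    return 0.9 if v == 1 else 0.5

if __name__ == "__main__":
    random.seed(0)
    num = den = 0.0
    for _ in range(20000):
        x, w = sample(2, target_cond, accept_prob)
        num += w * x[1]
        den += w
    # Self-normalized estimate of E_p[x2]; converges to
    # 0.8*0.9 + 0.2*0.2 = 0.76 as the sample count grows.
    print(num / den)
```

Because the weight divides out the full effective proposal q·α/Z, the estimator stays consistent for any acceptance function that is positive on the target's support; a well-learned α simply reduces variance by steering samples toward sequences compatible with upcoming evidence.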
Departments of Computer Science and Medicine
Advisor: David Page