Train classifier

  1. Create a training collage of positive and negative example images.
  2. Learn the epitome of the training collage.
  3. Calculate the conditional probability of each epitome patch being mapped into an image given that the image is a positive (or negative) example.
  4. Select the epitome patches that maximize the ratio
\begin{equation}
\frac{P(patch_{i}=1\vert pos)}{P(patch_{i}=1\vert neg)}
\end{equation}

  5. These patches, and their conditional probabilities of being mapped into positive and negative example images, can then be used to classify new images (see the sketch following this list).
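
A minimal sketch of the selection step (4) and the quantities carried forward to step (5), assuming the per-patch probabilities of Eq. (2) are already available as NumPy arrays for the positive and negative examples; the function name and the top-$k$ cutoff are illustrative choices, not part of the original description.

\begin{verbatim}
import numpy as np

def select_discriminative_patches(p_pos, p_neg, k):
    """Pick the k epitome patches maximizing the ratio in Eq. (1).

    p_pos[i], p_neg[i] : P(patch_i = 1 | pos), P(patch_i = 1 | neg),
    estimated from the training collage as in Eq. (2).
    """
    ratio = p_pos / p_neg                 # Eq. (1) for every patch
    top = np.argsort(ratio)[::-1][:k]     # indices of the k largest ratios
    return top, p_pos[top], p_neg[top]
\end{verbatim}

The selected indices, together with their two conditional probabilities, are exactly what step 5 uses to classify new images.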

The conditional probability of a patch mapping into an image, given that the image is a positive example, is calculated as

\begin{equation}
P(patch_{i}=1\vert pos) = \frac{N_i+\alpha}{N+\alpha}
\end{equation}

where $patch_{i}=1$ means that $patch_i$ does map into the example image, $N_i$ is the number of positive example images that patch $i$ maps into, and $N$ is the total number of positive example images. $\alpha$ is a pseudocount used to avoid zero probabilities. We say that $patch_i$ from the epitome maps into an image if, for at least one patch of that image, $patch_i$ has the highest posterior probability among all epitome patches of mapping into that image patch. The patch mapping probability for negative example images is calculated in the same fashion.
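
As a concrete illustration, the mapping criterion and Eq. (2) might be computed as below. This is a sketch under the assumption that the epitome-to-image patch posteriors are available as a matrix indexed by (epitome patch, image patch); the array layout and function names are assumptions made for the example, not taken from the text.

\begin{verbatim}
import numpy as np

def maps_into_image(posteriors):
    """posteriors[i, j] = posterior probability that epitome patch i
    maps into image patch j of a single example image.
    Patch i maps into the image if it has the highest posterior for
    at least one image patch (column)."""
    winners = np.argmax(posteriors, axis=0)   # best epitome patch per image patch
    mapped = np.zeros(posteriors.shape[0], dtype=bool)
    mapped[winners] = True
    return mapped

def mapping_probability(mapped_per_image, alpha=1.0):
    """Estimate P(patch_i = 1 | class) = (N_i + alpha) / (N + alpha)
    as in Eq. (2), where mapped_per_image is an
    (N images x num patches) boolean array for one class."""
    N = mapped_per_image.shape[0]             # total example images
    N_i = mapped_per_image.sum(axis=0)        # images each patch maps into
    return (N_i + alpha) / (N + alpha)
\end{verbatim}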

David Andrzejewski 2005-12-19