In my dissertation, I articulate these ideas in the context of a tightly constrained learning problem: reinforcement learning. I explain what makes features useful, in terms of the reward values associated with different actions when those features are active. I then define the importance of a feature in terms of those action values, measuring how "opinionated" the feature is about the values of the different actions. An intelligent agent can use this kind of information to simplify its task, allocating resources and attention to important features and ignoring details of the world that have no bearing on its task.
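As a rough illustration of this idea (not the dissertation's actual definition), one simple way to measure how "opinionated" a feature is would be the spread of the action values conditioned on that feature being active: a feature that assigns very different values to different actions is important, while one that is indifferent among actions can safely be ignored. The function name and the range-based spread measure below are assumptions for the sketch.

```python
def feature_importance(q_values):
    """Sketch of an 'opinionatedness' measure for a feature.

    q_values: the action values observed when this feature is active,
    one per action. Returns the range (max minus min) as a simple
    measure of spread; variance or standard deviation would serve
    the same purpose.
    """
    return max(q_values) - min(q_values)

# A feature that strongly favors one action over another is important...
opinionated = feature_importance([5.0, -1.0, 0.2])
# ...while a feature indifferent among actions contributes little.
indifferent = feature_importance([0.9, 1.0, 1.1])
```

An agent allocating limited attention could then rank features by this score and focus on the most opinionated ones.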