R. Maclin & J. Shavlik (1996).
Creating Advice-Taking Reinforcement Learners.
Machine Learning, 22, pp. 251-281.
This publication is available remotely in PostScript format.
Abstract:
Learning from reinforcements is a promising approach for creating intelligent agents. However, reinforcement learning usually requires a large number of training episodes. We present and evaluate a design that addresses this shortcoming by allowing a connectionist Q-learner to accept advice given, at any time and in a natural manner, by an external observer. In our approach, the advice-giver watches the learner and occasionally makes suggestions, expressed as instructions in a simple imperative programming language. Based on techniques from knowledge-based neural networks, we insert these programs directly into the agent's utility function. Subsequent reinforcement learning further integrates and refines the advice. We present empirical evidence that investigates several aspects of our approach and show that, given good advice, a learner can achieve statistically significant gains in expected reward. A second experiment shows that advice improves the expected reward regardless of the stage of training at which it is given, while another study demonstrates that subsequent advice can result in further gains in reward. Finally, we present experimental results that indicate our method is more powerful than a naive technique for making use of advice.
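The abstract describes compiling an advice-giver's imperative instructions directly into the weights of a connectionist Q-function, in the style of knowledge-based neural networks. The sketch below is an illustrative reading of that idea, not the paper's actual implementation: a rule of the form "IF all antecedents hold THEN prefer this action" is installed as one new hidden unit that approximates the conjunction, wired with a large weight to the advised action's Q output. All feature names, weights, and network sizes are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny pre-existing Q-network (hypothetical sizes and random weights).
rng = np.random.default_rng(0)
n_features, n_hidden, n_actions = 2, 4, 2
W1 = rng.normal(0, 0.1, (n_hidden, n_features))   # input -> hidden
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_actions, n_hidden))    # hidden -> Q outputs

def install_advice(W1, b1, W2, antecedents, action, w=8.0):
    """Compile 'IF all antecedent features THEN prefer action' into the
    network: add one hidden unit whose sigmoid approximates the AND of
    the antecedents, and connect it to the advised action's Q output
    with a large weight. Later backpropagation can refine all of this."""
    unit = np.zeros(W1.shape[1])
    unit[antecedents] = w
    # Bias chosen so the unit fires only when every antecedent is true.
    bias = -w * (len(antecedents) - 0.5)
    W1 = np.vstack([W1, unit])
    b1 = np.append(b1, bias)
    col = np.zeros(W2.shape[0])
    col[action] = w                       # boost the advised action
    W2 = np.hstack([W2, col[:, None]])
    return W1, b1, W2

def q_values(x, W1, b1, W2):
    return W2 @ sigmoid(W1 @ x + b1)

# Hypothetical advice: "IF enemy_near AND wall_behind THEN move_away",
# with features [enemy_near, wall_behind] and actions [move_away, move_toward].
W1, b1, W2 = install_advice(W1, b1, W2, antecedents=[0, 1], action=0)

x = np.array([1.0, 1.0])                  # both antecedents true
q = q_values(x, W1, b1, W2)               # advised action now dominates
```

Because the advice enters as ordinary weights rather than a fixed override, subsequent reinforcement learning can strengthen, weaken, or refine the rule, which is the mechanism the abstract's experiments evaluate.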
Computer Sciences Department
College of Letters and Science
University of Wisconsin - Madison