G. Towell & J. Shavlik (1992).
Using Symbolic Learning to Improve Knowledge-Based Neural Networks. Proceedings of the Tenth National Conference on Artificial Intelligence, pp. 177-182, San Jose, CA.
This publication is available in PDF and PostScript.
The previously-reported KBANN system integrates existing knowledge into neural networks by defining the network topology and setting initial link weights. Standard neural learning techniques can then be used to train such networks, thereby refining the information upon which the network is based. However, standard neural learning techniques are reputed to have difficulty training networks with multiple layers of hidden units; KBANN commonly creates such networks. In addition, standard neural learning techniques ignore some of the information contained in the networks created by KBANN. This paper describes a symbolic inductive learning algorithm for training such networks that uses this previously-ignored information and which helps to address the problems of training "deep" networks. Empirical evidence shows that this method improves not only learning speed, but also the ability of networks to generalize correctly to testing examples.
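To make the rules-to-network translation concrete, here is a minimal sketch (not the authors' code) of how a KBANN-style system might compile one propositional rule into a sigmoid unit: each positive antecedent gets a link weight of +w, each negated antecedent gets -w, and the bias is set so the unit activates only when the rule's body is satisfied. The weight magnitude `W = 4.0` and the helper names are illustrative assumptions.

```python
import math

# Illustrative weight magnitude; KBANN descriptions use a fixed large
# weight so initialized units behave like the symbolic rules.
W = 4.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rule_unit(pos, neg):
    """Compile a conjunctive rule into (weights, bias).

    pos -- names of positive antecedents (weight +W each)
    neg -- names of negated antecedents  (weight -W each)
    The bias places the threshold just below the number of positive
    antecedents, so the unit fires only when all of them are true and
    no negated antecedent is true.
    """
    weights = {p: W for p in pos}
    weights.update({n: -W for n in neg})
    bias = -(len(pos) - 0.5) * W
    return weights, bias

def activation(weights, bias, truth):
    """Sigmoid activation of the unit given a truth assignment (0/1)."""
    net = bias + sum(w * truth.get(name, 0.0) for name, w in weights.items())
    return sigmoid(net)

# Unit encoding the rule:  a :- b, c, not d.
w, b = rule_unit(pos=["b", "c"], neg=["d"])
print(activation(w, b, {"b": 1, "c": 1, "d": 0}))  # high (~0.88): body satisfied
print(activation(w, b, {"b": 1, "c": 0, "d": 0}))  # low (~0.12): missing antecedent
print(activation(w, b, {"b": 1, "c": 1, "d": 1}))  # low: negated antecedent holds
```

Because the initial weights mirror the rules exactly, subsequent training (whether by backpropagation or by the symbolic method the paper proposes) starts from a network whose behavior matches the domain theory and only needs to correct its errors.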
Computer Sciences Department
College of Letters and Science
University of Wisconsin - Madison