Lisa Torrey
University of Wisconsin
Graduate student webpage


General

I earned my Ph.D. in Computer Science from the University of Wisconsin in May 2009. I now teach at St. Lawrence University. You may wish to see my more recent website there. This UW site accurately describes my academic activities as a graduate student, but I no longer update it.

As a graduate student at the University of Wisconsin from 2003 to 2009, I worked with Professor Jude Shavlik.


Teaching

For two semesters I was an instructor for CS 302, the introductory programming course at UW-Madison. Here is the website for the Spring 2004 offering of that course.

I participated in the Delta Program in Research, Teaching, and Learning at UW-Madison, where I earned the Delta Certificate by completing courses and projects in teaching and learning.

Here are some miscellaneous tutorials that I developed during my time here:


Research

My research area is artificial intelligence, and more specifically machine learning.

My dissertation research was on relational transfer in reinforcement learning. Reinforcement learning allows artificial agents to learn how to act in an environment from reward signals. Transfer learning lets an agent apply knowledge gained in a previous task to help it learn a new one. Relational learning expresses learned concepts in first-order logic.
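
To make the transfer idea concrete, here is a minimal sketch (in Python) of tabular Q-learning in which the value table learned on a source task seeds learning on a target task. This is only an illustration, not the relational-macro or advice-taking methods described in the publications below; the env object, with its reset(), step(), and actions members, is a placeholder standing in for any small episodic task.

    import random

    def q_learning(env, episodes, alpha=0.1, gamma=0.9, epsilon=0.1, q=None):
        # Tabular Q-learning; passing in a q table learned on another task
        # is a simple (non-relational) form of transfer that just seeds the values.
        if q is None:
            q = {}  # maps (state, action) pairs to estimated long-term reward
        for _ in range(episodes):
            state = env.reset()
            done = False
            while not done:
                # Epsilon-greedy exploration.
                if random.random() < epsilon:
                    action = random.choice(env.actions)
                else:
                    action = max(env.actions, key=lambda a: q.get((state, a), 0.0))
                next_state, reward, done = env.step(action)
                # Temporal-difference update toward reward plus discounted future value.
                best_next = max(q.get((next_state, a), 0.0) for a in env.actions)
                old = q.get((state, action), 0.0)
                q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
                state = next_state
        return q

    # Transfer: learn on the source task, then start the target task from that table.
    # q_source = q_learning(source_env, episodes=500)
    # q_target = q_learning(target_env, episodes=500, q=dict(q_source))

Copying a value table like this only helps when the two tasks share state and action representations; the point of relational transfer is to express what was learned in first-order logic so that it can carry over even when they do not.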


Here is my Ph.D. thesis:

L. Torrey. Relational Transfer in Reinforcement Learning. Ph.D. Thesis, University of Wisconsin-Madison, Computer Sciences Department, 2009.


Here are my publications in this area:

L. Torrey and J. Shavlik. Policy Transfer via Markov Logic Networks. Proceedings of the 19th Conference on Inductive Logic Programming, 2009.

L. Torrey, J. Shavlik, T. Walker and R. Maclin. Transfer Learning via Advice Taking. In J. Koronacki, S. Wierzchon, Z. Ras and J. Kacprzyk, editors, Recent Advances in Machine Learning, Springer Studies in Computational Intelligence, 2009.

L. Torrey and J. Shavlik. Transfer Learning. In E. Soria, J. Martin, R. Magdalena, M. Martinez and A. Serrano, editors, Handbook of Research on Machine Learning Applications, IGI Global, 2009.

L. Torrey, T. Walker, R. Maclin and J. Shavlik. Advice Taking and Transfer Learning: Naturally Inspired Extensions to Reinforcement Learning. AAAI Fall Symposium on Naturally Inspired AI, 2008.

L. Torrey, J. Shavlik, S. Natarajan, P. Kuppili and T. Walker. Transfer in Reinforcement Learning via Markov Logic Networks. AAAI Workshop on Transfer Learning for Complex Tasks, 2008.

L. Torrey, J. Shavlik, T. Walker and R. Maclin. Rule Extraction for Transfer Learning. In J. Diederich, editor, Rule Extraction from Support Vector Machines, Springer, 2008.

L. Torrey, J. Shavlik, T. Walker and R. Maclin. Relational Macros for Transfer in Reinforcement Learning. Proceedings of the 17th Conference on Inductive Logic Programming, 2007.

L. Torrey, J. Shavlik, T. Walker and R. Maclin. Skill Acquisition via Transfer Learning and Advice Taking. Proceedings of the 17th European Conference on Machine Learning, 2006.

L. Torrey, J. Shavlik, T. Walker, and R. Maclin. Relational Skill Transfer via Advice Taking. ICML Workshop on Structural Knowledge Transfer for Machine Learning, 2006.

L. Torrey, T. Walker, J. Shavlik, and R. Maclin. Using Advice to Transfer Knowledge Acquired in One Reinforcement Learning Task to Another. Proceedings of the 16th European Conference on Machine Learning, 2005.

L. Torrey, T. Walker, J. Shavlik, and R. Maclin. Knowledge Transfer Via Advice Taking. Proceedings of the 3rd International Conference on Knowledge Capture, 2005.


Some underlying methods we used in this research are advice-taking and knowledge-based support-vector regression. Here is some work my group published on these methods (a small illustrative sketch of the advice idea follows this list):

R. Maclin, E. Wild, J. Shavlik, L. Torrey and T. Walker. Refining Rules Incorporated into Knowledge-Based Support Vector Learners Via Successive Linear Programming. Proceedings of the 22nd AAAI Conference on Artificial Intelligence, 2007.

R. Maclin, J. Shavlik, T. Walker and L. Torrey. A Simple and Effective Method for Incorporating Advice into Kernel Methods. Proceedings of the 21st National Conference on Artificial Intelligence, 2006.

R. Maclin, J. Shavlik, L. Torrey, T. Walker, and E. Wild. Giving Advice about Preferred Actions to Reinforcement Learners Via Knowledge-Based Kernel Regression. Proceedings of the 20th National Conference on Artificial Intelligence, 2005.

R. Maclin, J. Shavlik, L. Torrey, and T. Walker. Knowledge-Based Support Vector Regression for Reinforcement Learning. IJCAI Workshop on Reasoning, Representation, and Learning in Computer Games, 2005.
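
The papers above incorporate advice as soft constraints within a support-vector formulation. As a much smaller illustration of that idea (not the knowledge-based kernel regression formulation itself), here is a hypothetical Python sketch that fits a linear value estimator by gradient descent and adds a hinge penalty whenever a prediction falls below an advised value, so that the data can outweigh the advice:

    import numpy as np

    def fit_with_advice(X, y, advice, lam=0.01, mu=1.0, lr=0.01, steps=2000):
        # Fit a linear value estimator w . x to (feature, target-value) examples.
        # Each piece of advice is a (feature_vector, advised_value) pair meaning
        # "the prediction for this kind of state should be at least this value".
        # Violated advice adds a hinge penalty, so it acts as a soft constraint.
        n, d = X.shape
        w = np.zeros(d)
        for _ in range(steps):
            # Gradient of mean squared error plus L2 regularization.
            grad = 2.0 * X.T @ (X @ w - y) / n + 2.0 * lam * w
            # Gradient of mu * max(0, advised_value - w . x) for each advice item.
            for x_adv, value in advice:
                if x_adv @ w < value:
                    grad -= mu * x_adv
            w -= lr * grad
        return w

    # Hypothetical usage: advise that states matching the feature pattern [1, 0]
    # should be valued at 5 or more, even though the data alone suggests less.
    # X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    # y = np.array([2.0, 1.0, 3.0])
    # w = fit_with_advice(X, y, advice=[(np.array([1.0, 0.0]), 5.0)])

In the publications above, the advice and its slack terms enter the support-vector optimization directly; this sketch only conveys the soft-constraint intuition.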


Finally, here are some miscellaneous other publications from 2003 to 2009:

T. Walker, L. Torrey, J. Shavlik, and R. Maclin. Building Relational World Models for Reinforcement Learning. Proceedings of the 17th Conference on Inductive Logic Programming, 2007.

L. Torrey, J. Coleman and B. Miller. A Comparison of Interactivity in the Linux 2.6 Scheduler and an MLFQ Scheduler. Software: Practice and Experience, 2006.

E. Robinson, L. Torrey, J. Newland, A. Theuninck, C. Nove, and R. Maclin. Analysis of Multichannel Internet Communication. Technical Report SAND2004-4905, Sandia National Laboratories, 2004.

L. Torrey. An Active Learning Approach to Efficiently Ranking Retrieval Engines. Technical Report TR2003-449, Computer Science Dept., Dartmouth College, 2003.