Workshop on Machine Learning Meets Human Learning
held at NIPS 2008, Whistler, Canada
December 12th, 2008

Description

Can statistical machine learning theories and algorithms help explain human learning? Broadly speaking, machine learning studies the fundamental laws that govern all learning processes, including both artificial systems (e.g., computers) and natural systems (e.g., humans). It has long been understood that theories and algorithms from machine learning are relevant to understanding aspects of human learning. For example, hierarchical Bayesian models provide a way to understand how people could maintain uncertainty at different levels of abstraction; neural networks have been a valuable tool for psychologists as a computational model of the way brains learn; reinforcement learning models agree well with the activity of dopaminergic neurons during reward-based learning; and the sparse representations used in computer vision closely predict the visual features found in early visual cortex. Human cognition also carries potential lessons for machine learning research, since people still learn languages, concepts, and causal relationships from far less data than any automated system. There is a rich opportunity to develop a general theory of learning that covers both machines and humans, with the potential to deepen our understanding of human cognition and to take insights from human learning to improve machine learning systems.
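To make the first of these connections concrete, the sketch below is a purely illustrative example (not taken from the workshop materials or the references; the grid approximation, the Beta parameterization, and all numerical values are assumptions chosen for illustration). It shows how a two-level hierarchical Bayesian learner can maintain uncertainty at two levels of abstraction at once: the bias of one particular coin, and the typical bias of coins in general. Observing flips of a single coin revises beliefs at both levels simultaneously.

    # Illustrative sketch: a two-level hierarchical Bayesian model on a grid.
    # Upper level: the typical bias of coins in general (a hyperparameter).
    # Lower level: the bias of the particular coin being flipped.
    import numpy as np

    pop_mean = np.linspace(0.01, 0.99, 99)    # typical bias of coins in general
    coin_bias = np.linspace(0.01, 0.99, 99)   # bias of this particular coin

    # Prior: uniform over the population mean; given the population mean,
    # this coin's bias follows a Beta distribution concentrated around it.
    concentration = 10.0                      # assumed value, for illustration
    prior_pop = np.ones_like(pop_mean) / pop_mean.size
    a = concentration * pop_mean[:, None]
    b = concentration * (1.0 - pop_mean[:, None])
    coin_given_pop = coin_bias[None, :] ** (a - 1) * (1 - coin_bias[None, :]) ** (b - 1)
    coin_given_pop /= coin_given_pop.sum(axis=1, keepdims=True)

    # Joint prior over (population mean, coin bias), updated on observed data:
    # say the learner sees 7 heads in 10 flips of this one coin.
    joint = prior_pop[:, None] * coin_given_pop
    heads, flips = 7, 10
    likelihood = coin_bias[None, :] ** heads * (1 - coin_bias[None, :]) ** (flips - heads)
    posterior = joint * likelihood
    posterior /= posterior.sum()

    # Marginal posteriors: the same ten flips revise beliefs at both levels,
    # which is the sense in which uncertainty is maintained at multiple
    # levels of abstraction.
    post_coin = posterior.sum(axis=0)   # belief about this coin's bias
    post_pop = posterior.sum(axis=1)    # belief about coins in general
    print("posterior mean bias of this coin:", float((coin_bias * post_coin).sum()))
    print("posterior mean of the population bias:", float((pop_mean * post_pop).sum()))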

This workshop will consist of invited talks and contributed posters. The goal is to bring together the different communities that study learning: machine learning, cognitive science, neuroscience, and educational science. First, we seek to provide researchers with a common grounding in the study of learning by translating each discipline's domain-specific knowledge, specialized methods, assumptions, and goals into shared terminology and problem formulations. Second, we will investigate the value of advanced machine learning theories and algorithms as computational models of human learning behaviors, including, but not limited to, the role of prior knowledge, learning from labeled and unlabeled data, and learning from active queries. Finally, we wish to explore insights from the cognitive study of human learning that could inspire novel machine learning theories and algorithms. It is our hope that the NIPS workshop will provide a venue for cross-pollination between machine learning approaches and cognitive theories of learning, spurring further advances in both areas.

Talks

Posters

Workshop Program

The 1-day workshop consists of invited talks, poster sessions, and discussions. See the workshop program in PDF.

Call for Poster Contributions

We invite poster submissions on all topics at the interface of machine learning and human learning. Please submit an extended abstract (200 words to one page) by email to Xiaojin Zhu (jerryzhu@cs.wisc.edu). The abstract must be in either plain text or PDF. Please include "NIPS Workshop Abstract" in the subject line of your email.

Important Dates

Organizers

Bibliography

  1. Behrens TE, Woolrich MW, Walton ME, & Rushworth MF. Learning the value of information in an uncertain world. Nature Neuroscience 10:1214-1221. 2007.
  2. Camerer, C.F. Behavioral Game Theory. Princeton University Press. 2003.
  3. Castro, R., Kalish, C., Nowak, R., Qian, R., Rogers, T., & Zhu, X. Human active learning. In Advances in Neural Information Processing Systems (NIPS) 22, 2008.
  4. Chater, N. & Oaksford, M. eds. The Probabilistic Mind: Prospects for Rational Models of Cognition, Oxford: Oxford University Press. 2008.
  5. Chater, N., Tenenbaum, J.B., & Yuille, A. Probabilistic models of cognition: Conceptual foundations. Trends in Cognitive Sciences 10(7):287--291. Elsevier. 2006.
  6. Dayan, P., & Daw, N.D. Decision theory, reinforcement learning, and the brain. Cognitive, Affective, and Behavioral Neuroscience, in press. 2008.
  7. Douglas, R. & Sejnowski, T. Future Challenges for the Science and Engineering of Learning. National Science Foundation Workshop Report. 2007.
  8. Griffiths, T.L., Kemp, C., & Tenenbaum, J.B. Bayesian models of cognition. Cambridge Handbook of Computational Cognitive Modeling. Cambridge University Press. 2008.
  9. Griffiths, T.L., Sanborn, A. N., Canini, K. R., & Navarro, D. J. Categorization as nonparametric Bayesian density estimation. In M. Oaksford and N. Chater (Eds.), The probabilistic mind: Prospects for rational models of cognition. Oxford: Oxford University Press. 2008.
  10. Langley, P. Intelligent behavior in humans and machines. Technical report, Computational Learning Laboratory, CSLI, Stanford University, 2006.
  11. Mitchell, T. The discipline of machine learning. Technical Report CMU-ML-06-108, Carnegie Mellon University, 2006.
  12. Sanborn, A. N., & Griffiths, T. L. Markov chain Monte Carlo with people. In Advances in Neural Information Processing Systems (NIPS) 20, 2008.
  13. Tenenbaum, J.B., Griffiths, T.L., & Kemp, C. Theory-based Bayesian models of inductive learning and reasoning. Trends in Cognitive Sciences 10(7):309--318. 2006.
  14. Zhu, X., Rogers, T., Qian, R., & Kalish, C. Humans perform semi-supervised classification too. AAAI, 2007.