Using Machine Learning to Understand and Enhance Human Learning Capacity

Research Projects

The overall goal of this project is to adapt computational learning models and theory, originally developed for machines, to predict and influence human learning behavior.

Capacity measure of the human mind

What is the VC-dimension of the human mind? In machine learning, the VC-dimension is a well-known capacity measure for a model family. What if the "model family" is the human mind, e.g., all the classifiers that a person can come up with? Can we estimate such a capacity for humans? We propose a method to estimate the Rademacher complexity of the human mind in binary categorization tasks. Such an estimate reveals the intrinsic complexity of the human thinking process, and has direct application to understanding overfitting in human learning.
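The idea can be sketched computationally. Empirical Rademacher complexity measures how well a learner can fit purely random labels: draw random +1/-1 labels for a set of items, let the learner try to fit them, and record the correlation between the random labels and the learner's responses. The sketch below assumes a hypothetical `fit_responses` callback standing in for the experimental study-then-recall procedure; it is an illustration of the general estimator, not the project's exact protocol.

```python
import random

def empirical_rademacher(items, fit_responses, n_draws=100, seed=0):
    """Estimate the empirical Rademacher complexity of a learner on `items`.

    `fit_responses(items, random_labels)` (a hypothetical callback) returns
    the learner's +1/-1 responses after it has tried to fit the random
    labels; for a human subject this would be a study-then-recall session.
    """
    rng = random.Random(seed)
    m = len(items)
    total = 0.0
    for _ in range(n_draws):
        # draw random +/-1 ("Rademacher") labels for the items
        sigma = [rng.choice((-1, 1)) for _ in range(m)]
        preds = fit_responses(items, sigma)
        # correlation between the random labels and the learner's fit
        total += sum(s * p for s, p in zip(sigma, preds)) / m
    return total / n_draws
```

A learner that can memorize anything scores near 1 (high capacity, prone to overfitting); a learner that ignores the labels scores near 0.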

Optimal teaching

Given a task and a learner, can a teacher design an optimal teaching strategy so that the learner "gets" the true concept quickly? Recent work in the machine learning community on teaching dimension and curriculum learning has begun to address this question. We are developing new computational theory and performing human behavioral experiments to advance our understanding of optimal teaching.
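A textbook example makes the teaching-dimension idea concrete. For 1-D threshold concepts (label positive iff x >= threshold), a helpful teacher needs only two examples: the closest negative and the closest positive straddling the boundary. This minimal sketch is illustrative only; the function name is ours, and the project's actual teaching models are richer.

```python
def optimal_teaching_set(threshold, examples):
    """Optimal teaching set for a 1-D threshold concept (label = x >= threshold).

    Two well-chosen examples pin the boundary down as tightly as the
    example pool allows: the largest negative and the smallest positive.
    This is why the teaching dimension of threshold concepts is 2, even
    though a passive learner may need many more labeled examples.
    """
    negatives = [x for x in examples if x < threshold]
    positives = [x for x in examples if x >= threshold]
    return (max(negatives), min(positives))
```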

Human semi-supervised learning

Human category learning is traditionally modeled as supervised learning. We demonstrated that it is in fact greatly influenced by unlabeled data, and should instead be modeled as semi-supervised learning. For example, after supervised training, merely performing categorization on unlabeled test items can shift a person's decision boundary.
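The boundary-shift effect can be illustrated with a simple 1-D self-training sketch (our simplification, not the paper's exact model): start from the supervised boundary, the midpoint of the labeled class means, then repeatedly classify the unlabeled items and recompute the boundary from the resulting class means. If the unlabeled items cluster with an off-center gap, the boundary migrates toward the gap.

```python
def ssl_boundary(labeled_neg, labeled_pos, unlabeled, n_iters=20):
    """1-D semi-supervised boundary estimate (illustrative self-training sketch).

    With no unlabeled data the result is the purely supervised boundary;
    unlabeled data pull the boundary toward gaps in their distribution.
    """
    neg_mean = sum(labeled_neg) / len(labeled_neg)
    pos_mean = sum(labeled_pos) / len(labeled_pos)
    boundary = (neg_mean + pos_mean) / 2  # supervised starting point
    for _ in range(n_iters):
        # classify unlabeled items with the current boundary
        left = labeled_neg + [x for x in unlabeled if x < boundary]
        right = labeled_pos + [x for x in unlabeled if x >= boundary]
        # move the boundary to the midpoint of the new class means
        boundary = (sum(left) / len(left) + sum(right) / len(right)) / 2
    return boundary
```

With labeled items at -1 and +1, the supervised boundary sits at 0; adding unlabeled items clustered mostly on the left shifts it leftward, mimicking the test-item effect observed in human subjects.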

Human active learning

Under certain conditions, an active machine learner provably outperforms a passive learner. If we allow a human learner to submit queries and obtain oracle labels, can they do better than peers who passively receive i.i.d. training samples? We showed that the answer is yes.
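The classic machine-learning case behind this claim is the 1-D threshold concept: an active learner can binary-search for the threshold with queries, using exponentially fewer labels than a passive learner needs from an i.i.d. sample. A minimal sketch of that argument, assuming an `oracle` label function:

```python
def active_learn_threshold(oracle, lo, hi, tol=1e-4):
    """Binary-search active learner for a 1-D threshold concept.

    `oracle(x)` returns the true label (True iff x >= threshold), and the
    threshold is assumed to lie in (lo, hi). Locating it to precision
    `tol` takes O(log((hi - lo) / tol)) label queries, versus
    O((hi - lo) / tol) labels for a passive learner on an i.i.d. sample.
    """
    queries = 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        queries += 1
        if oracle(mid):   # mid labeled positive: threshold is at or below mid
            hi = mid
        else:             # mid labeled negative: threshold is above mid
            lo = mid
    return (lo + hi) / 2, queries
```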

Publications from this project


  1. Xiaojin Zhu. Machine teaching for Bayesian learners in the exponential family. In Advances in Neural Information Processing Systems (NIPS), 2013.
    [pdf | poster]

  2. Kwang-Sung Jun, Xiaojin Zhu, Burr Settles, and Timothy Rogers. Learning from Human-Generated Lists. In The 30th International Conference on Machine Learning (ICML), 2013.
    [pdf | slides | SWIRL v1.0 code | video]

  3. Bryan R. Gibson, Timothy T. Rogers, and Xiaojin Zhu. Human semi-supervised learning. Topics in Cognitive Science, 5(1):132-172, 2013.
    [link]

  4. Xiaojin Zhu. Persistent homology: An introduction and a new text representation for natural language processing. In The 23rd International Joint Conference on Artificial Intelligence (IJCAI), 2013.
    [pdf | slides | data and code ]

  5. Burr Settles and Xiaojin Zhu. Behavioral factors in interactive training of text classifiers. In North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL HLT). Short paper. 2012.
    [pdf]

  6. Faisal Khan, Xiaojin Zhu, and Bilge Mutlu. How do humans teach: On curriculum learning and teaching dimension. In Advances in Neural Information Processing Systems (NIPS) 24, 2011.
    [pdf | data | slides]

  7. Shilin Ding, Grace Wahba, and Xiaojin Zhu. Learning higher-order graph structure with features by structure penalty. In Advances in Neural Information Processing Systems (NIPS) 24, 2011. [pdf]

  8. Jun-Ming Xu, Xiaojin Zhu, and Timothy T. Rogers. Metric learning for estimating psychological similarities. ACM Transactions on Intelligent Systems and Technology (ACM TIST), 2011. [journal link | unofficial version | data | code]

  9. David Andrzejewski, Xiaojin Zhu, Mark Craven, and Ben Recht. A framework for incorporating general domain knowledge into Latent Dirichlet Allocation using First-Order Logic. The Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI-11), 2011. [pdf | slides | poster | code]

  10. Xiaojin Zhu, Bryan Gibson, and Timothy Rogers. Co-training as a human collaboration policy. In The Twenty-Fifth Conference on Artificial Intelligence (AAAI-11), 2011. [pdf]

  11. Andrew Goldberg, Xiaojin Zhu, Alex Furger, and Jun-Ming Xu. OASIS: Online active semisupervised learning. In The Twenty-Fifth Conference on Artificial Intelligence (AAAI-11), 2011. [pdf]

  12. Chen Yu, Jun-Ming Xu, and Xiaojin Zhu. Word learning through sensorimotor child-parent interaction: A feature selection approach. The 33rd Annual Conference of the Cognitive Science Society (CogSci 2011), 2011.
    [pdf]

  13. Charles W. Kalish, Timothy T. Rogers, Jonathan Lang, and Xiaojin Zhu. Can semi-supervised learning explain incorrect beliefs about categories? Cognition, 2011. [link]

  14. Bryan Gibson, Xiaojin Zhu, Tim Rogers, Chuck Kalish, and Joseph Harrison. Humans learn using manifolds, reluctantly. In Advances in Neural Information Processing Systems (NIPS) 23, 2010. [pdf | NIPS talk slides]

  15. Andrew Goldberg, Xiaojin Zhu, Benjamin Recht, Jun-Ming Xu, and Robert Nowak. Transduction with matrix completion: Three birds with one stone. In Advances in Neural Information Processing Systems (NIPS) 23, 2010. [pdf]

  16. Xiaojin Zhu, Bryan R. Gibson, Kwang-Sung Jun, Timothy T. Rogers, Joseph Harrison, and Chuck Kalish. Cognitive models of test-item effects in human category learning. In The 27th International Conference on Machine Learning (ICML), 2010. [paper pdf]

  17. Bryan R Gibson, Kwang-Sung Jun, and Xiaojin Zhu. With a little help from the computer: Hybrid human-machine systems on bandit problems. In NIPS 2010 Workshop on Computational Social Science and the Wisdom of Crowds, 2010.
    [pdf]

Selected highlights from publications

In terms of understanding learning, we have made a number of discoveries; in terms of enhancing learning, we have made corresponding progress.

Data sets for download

Code for download

Research Group

Faculty

Graduate Students

Undergraduate Students

Staff

Collaborators


Related NIPS 2008 Workshop on Machine Learning Meets Human Learning


Professor Xiaojin Zhu in Computer Sciences at the University of Wisconsin-Madison is the recipient of a 2010 Faculty Early Career Development Award (CAREER) from the National Science Foundation, a five-year grant designed to boost young faculty in establishing integrated research and educational activities while helping to address areas of important need.

Zhu's CAREER project is titled "Using Machine Learning to Understand and Enhance Human Learning Capacity." His project aims to discover the common mathematical principles that govern learning in both humans and computers. Examples include rigorous generalization error bounds (how well can a student or a robot generalize what the teacher taught to new problems?), sparsity (how well can the student or robot identify a few salient features of a problem, out of a haystack of irrelevant features?), and active learning (can the student or robot ask good questions to speed up its own learning?). He expects the project will lead to novel computational approaches to enhance human learning in and out of classrooms, and advance machine learning by incorporating insights on tasks where humans excel.

This project is based upon work supported by the National Science Foundation under Grant No. IIS-0953219. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.