Human, Animal, and Machine Learning: Experiment and Theory
Fridays 3:45 p.m. - 5 p.m., the Berkowitz room (338 Psychology)
Feb. 6 Jeff Johnson
An overview of the Dynamic Field Theory framework and its applications to visual cognition (with some discussion of other applications)
Feb. 13 Vanessa Simmering
Vanessa will discuss how she has applied this framework to research on the development of visual working memory capacity.
Feb. 20 Jerry Zhu, A prospectus for research in human semi-supervised learning.
Feb. 27 Rick Jenison, Economic value coding by single neurons in the human amygdala
Neuroeconomics studies the computations that the brain carries out in
order to make value-based decisions, as well as the neural
implementation of those computations. I'll talk about our ongoing work
to decode value signals from single neuron activity recorded from
patients undergoing surgery for intractable epilepsy.
March 6: Rob Nowak, Applying active learning to signal detection with sparse data.
March 13: Tim Rogers, A simple model of active vision for object recognition.
March 20: Spring Break!
March 27: No meeting
April 3: Lisa Torrey, Reinforcement Learning in Machines and Brains
I'll introduce what the reinforcement learning problem is and give an
overview of algorithmic solutions to it. Then I'll go into more depth on
one popular algorithm, Q-learning. I'll show how it works, how it can be
extended to work better, and what the challenges are in using it. Finally,
I'll link to psychology by discussing some research on human reinforcement learning.
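The Q-learning algorithm mentioned above can be sketched in a few lines. This is a generic illustration, not code from the talk: a tabular agent on a hypothetical five-state chain (the environment, constants, and names here are all my own assumptions), which learns that moving "right" is the better action in every state.

```python
import random

# Minimal tabular Q-learning sketch on a toy 5-state chain (hypothetical
# environment): moving right from the last state yields reward 1 and ends
# the episode; every other step yields reward 0.
N_STATES = 5
ACTIONS = [0, 1]                  # 0 = left, 1 = right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1 # learning rate, discount, exploration rate

def step(s, a):
    """Deterministic chain dynamics: return (next_state, reward, done)."""
    if a == 1 and s == N_STATES - 1:
        return s, 1.0, True       # reaching the right end ends the episode
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, 0.0, False

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
random.seed(0)

def greedy(s):
    """Greedy action with random tie-breaking."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for _ in range(500):              # episodes
    s = 0
    for _ in range(200):          # step cap as a safety guard
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < EPS else greedy(s)
        s2, r, done = step(s, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        target = r + (0.0 if done else GAMMA * max(Q[(s2, x)] for x in ACTIONS))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2
        if done:
            break

# After learning, "right" dominates in every state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)]
print(policy)
```

The key design point is that the update bootstraps off the *max* over next-state actions, so Q-learning learns the greedy policy's values even while behaving exploratorily (off-policy learning).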
April 10: Michael Coen, Learning from Games: Inductive Bias and Bayesian Inference
A classic problem in understanding human intelligence is determining how
people do inductive inference when presented with small amounts of data.
I examine this question in the context of the guess-the-next-number
game, where players are presented with short series of numbers and asked
to guess the next one in the sequence. For example, I show you the
sequence <1,2,4,x> and ask you what x is. What answer do you select, and why?
This work uses a novel, general approach employing a stochastic context
free grammar to model the operations that generate a given sequence.
The individual probabilities in the grammar are learned using Gibbs
sampling by observing people solve numerous instances of this game.
The learned probabilities thereby capture the mathematical inductive biases of our sample.
I demonstrate that this framework successfully predicts human performance
on new sequence-guessing problems. I also show how our results confirm a
large body of psychological research about how people do math "in their
heads," while providing evidence against several theories of
descriptive parsimony popular in the machine learning community.
Finally, I briefly examine applications of this framework to other
domains, including predicting website exploration by modeling the
inductive biases that guide web surfing.
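The inference pattern behind the guess-the-next-number game can be illustrated with a toy sketch. To be clear, this is not the talk's model: the actual work uses a stochastic context-free grammar over sequence-generating operations, with rule probabilities learned by Gibbs sampling. Here we merely enumerate three hand-picked candidate rules (my own assumptions) to show how rules inconsistent with the data drop out, leaving the prior over the survivors as the posterior.

```python
from fractions import Fraction

# Each hypothesis: (name, prior, rule mapping (position n, value x) -> next value).
HYPOTHESES = [
    ("add 1",  Fraction(1, 3), lambda n, x: x + 1),   # 1, 2, 3, 4, ...
    ("double", Fraction(1, 3), lambda n, x: 2 * x),   # 1, 2, 4, 8, ...
    ("add n",  Fraction(1, 3), lambda n, x: x + n),   # 1, 2, 4, 7, ...
]

def consistent(rule, seq):
    """Does the rule reproduce every observed transition exactly?"""
    return all(rule(n + 1, seq[n]) == seq[n + 1] for n in range(len(seq) - 1))

def posterior_predict(seq):
    # Likelihood is 0/1 here, so the posterior is just the renormalized
    # prior over the rules that fit the observed sequence.
    weights = {name: p for name, p, rule in HYPOTHESES if consistent(rule, seq)}
    z = sum(weights.values())
    post = {name: w / z for name, w in weights.items()}
    preds = {name: rule(len(seq), seq[-1])
             for name, _, rule in HYPOTHESES if name in post}
    return post, preds

post, preds = posterior_predict([1, 2, 4])
print(post)   # "double" and "add n" each get posterior 1/2; "add 1" is ruled out
print(preds)  # "double" predicts 8, "add n" predicts 7
```

With only three observations, two rules remain tied; the priors (learned from human behavior in the real model) are what would break such ties and capture inductive bias.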
April 17: No meeting; Partha Niyogi talk at MALBEC.
April 24: Stark Draper, Information theory: A tutorial overview (Deferred to next semester)
May 1: Daragh Sibly. (room 259 on the 2nd floor of the Educational Sciences Building)
I'll discuss some computational cognitive models that
simulate how learning to read might be affected if you've previously
learned to speak in a different dialect. This is a very real
situation, given that many children learn to speak in African American
English prior to learning to read in Standard American English. I'll
present computational models that suggest why this would make it
harder to learn to read, and how we might mitigate this effect.
May 8: Steve Paulson, producer of "To the Best of Our Knowledge," will talk about
his meetings with several prominent cognitive scientists.
HAMLET mailing list
The MALBEC lectures ("Mathematics, Algorithms, Learning, Brains, Engineering, Computers")
Fall 2008 archive
Contact: Tim Rogers (firstname.lastname@example.org), Jerry Zhu (email@example.com) (Add 'u' to the addresses)