CS 540 Lecture Notes, Fall 1996

Neural Networks (Chapter 19)


Main Ideas

Why Neural Nets?

Neurobiology Constraints on Human Information Processing

Perceptrons

Learning in Neural Nets

Example: Learning OR in a Perceptron

The result of executing the learning algorithm for 3 epochs, with learning rate alpha = .2 and initial weights w1 = .1, w2 = .5, and threshold t = .8:

x1  x2  T  O  delta_w1   w1  delta_w2   w2  delta_w3   w3 (=t)
--  --  -  -  --------  ---  --------  ---  --------  --------
 -   -  -  -     -       .1     -       .5     -        .8
 0   0  0  0     0       .1     0       .5     0        .8
 0   1  1  0     0       .1    .2       .7   -.2        .6
 1   0  1  0    .2       .3     0       .7   -.2        .4
 1   1  1  1     0       .3     0       .7     0        .4
 0   0  0  0     0       .3     0       .7     0        .4
 0   1  1  1     0       .3     0       .7     0        .4
 1   0  1  0    .2       .5     0       .7   -.2        .2
 1   1  1  1     0       .5     0       .7     0        .2
 0   0  0  0     0       .5     0       .7     0        .2
 0   1  1  1     0       .5     0       .7     0        .2
 1   0  1  1     0       .5     0       .7     0        .2
 1   1  1  1     0       .5     0       .7     0        .2

So, the final learned network is: w1 = .5, w2 = .7, t = .2, which correctly classifies all four OR examples.
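The trace above can be reproduced with a short sketch. The update rules assumed here are the standard perceptron rules read off from the table: delta_wi = alpha * (T - O) * xi with alpha = .2, and the threshold moving in the opposite direction, delta_t = -alpha * (T - O).

```python
# Perceptron learning of OR, reproducing the trace above.
# Assumed from the table: alpha = 0.2, step activation with threshold t,
# delta_wi = alpha * (T - O) * xi, and delta_t = -alpha * (T - O).

def train_or(epochs=3, alpha=0.2):
    w1, w2, t = 0.1, 0.5, 0.8                            # initial weights/threshold
    data = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]  # (x1, x2, T) for OR
    for _ in range(epochs):
        for x1, x2, T in data:
            O = 1 if w1 * x1 + w2 * x2 >= t else 0       # step activation
            w1 += alpha * (T - O) * x1
            w2 += alpha * (T - O) * x2
            t -= alpha * (T - O)                         # threshold moves opposite
    return w1, w2, t

print(tuple(round(v, 1) for v in train_or()))  # → (0.5, 0.7, 0.2)
```

After three epochs the weights match the last row of the table, and the fourth epoch would make no further changes since every example is already classified correctly.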

Linear Separability

XOR - A Function that Cannot be Learned by a Perceptron

Perceptron Convergence Theorem

Beyond Perceptrons

Computing XOR using a 2-Layer Feedforward Network

The following network computes XOR. Notice that the left hidden unit effectively computes OR and the right hidden unit computes AND. Then the output unit outputs 1 if the OR output is 1 and the AND output is 0.
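One concrete choice of step-unit weights realizing this structure is sketched below. The notes describe only the topology (left hidden unit computes OR, right computes AND, output fires when OR is on and AND is off); the specific weights and thresholds here are illustrative assumptions, not taken from the notes.

```python
# A 2-layer feedforward network of step units computing XOR.
# Weights/thresholds below are one illustrative choice (assumptions):
# left hidden unit = OR, right hidden unit = AND,
# output = 1 iff OR fires and AND does not.

def step(total, threshold):
    return 1 if total >= threshold else 0

def xor_net(x1, x2):
    h_or  = step(1.0 * x1 + 1.0 * x2, 0.5)      # OR of the inputs
    h_and = step(1.0 * x1 + 1.0 * x2, 1.5)      # AND of the inputs
    return step(1.0 * h_or - 1.0 * h_and, 0.5)  # OR and not AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))
```

The negative weight from the AND unit is what makes this work: the hidden layer re-represents the inputs so that the two positive examples, (0,1) and (1,0), become linearly separable from (0,0) and (1,1).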

Backpropagation Learning in Feedforward Neural Nets

Computing the Gradient of E

We'll consider the problem of a 2-layer network, where we must update both the weights connecting nodes in the hidden layer to the output layer and the weights connecting nodes in the input layer to the hidden layer.
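A minimal sketch of one backpropagation step for such a 2-layer network is given below, assuming sigmoid units and squared error E = 1/2 * (T - O)^2 (the standard setting; the network size, weights, and single-output restriction here are illustrative assumptions, not from the notes).

```python
import math

# One backpropagation step for a 2-layer (one hidden layer) network of
# sigmoid units with a single output and squared error E = 1/2*(T - O)^2.
# Sizes and weights are illustrative assumptions.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def backprop_step(x, T, W_hid, W_out, alpha=0.5):
    # Forward pass: input -> hidden -> output.
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W_hid]
    O = sigmoid(sum(w * hj for w, hj in zip(W_out, h)))
    # Output delta: (T - O) times the sigmoid derivative O*(1 - O).
    delta_out = (T - O) * O * (1 - O)
    # Hidden deltas: propagate delta_out back through W_out, times
    # each hidden unit's own sigmoid derivative.
    delta_hid = [delta_out * W_out[j] * h[j] * (1 - h[j])
                 for j in range(len(h))]
    # Gradient-descent weight updates.
    W_out = [w + alpha * delta_out * h[j] for j, w in enumerate(W_out)]
    W_hid = [[w + alpha * delta_hid[j] * x[i] for i, w in enumerate(row)]
             for j, row in enumerate(W_hid)]
    return W_hid, W_out, O
```

Repeatedly applying this step to a single training pattern drives the output O toward its target T, since each update moves the weights downhill on E.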

Other Issues

Summary

Applications


Last modified December 5, 1996
Copyright © 1996 by Charles R. Dyer. All rights reserved.