# Other Materials
## Pre-recorded Videos from 2020
Lecture 5 Part 1 (Support Vector Machines):
Link
Lecture 5 Part 2 (Subgradient Descent):
Link
Lecture 5 Part 3 (Kernel Trick):
Link
Lecture 6 Part 1 (Decision Tree):
Link
Lecture 6 Part 2 (Random Forest):
Link
Lecture 6 Part 3 (Nearest Neighbor):
Link
Lecture 7 Part 1 (Convolution):
Link
Lecture 7 Part 2 (Gradient Filters):
Link
Lecture 7 Part 3 (Computer Vision):
Link
Lecture 8 Part 1 (Computer Vision):
Link
Lecture 8 Part 2 (Viola-Jones):
Link
Lecture 8 Part 3 (Convolutional Neural Net):
Link
## Relevant Websites
Support Vector Machine:
Link
RBF Kernel SVM Demo:
Link
Decision Tree:
Link
Random Forest Demo:
Link
K Nearest Neighbor:
Link
Map of Manhattan:
Link
Voronoi Diagram:
Link
KD Tree:
Link
Image Filter:
Link
Canny Edge Detection:
Link
SIFT:
PDF
HOG:
PDF
Conv Net on MNIST:
Link
Conv Net Vis:
Link
LeNet:
PDF,
Link
Google Inception Net:
PDF
CNN Architectures:
Link
Image to Image:
Link
Image segmentation:
Link
Image colorization:
Link,
Link
Image Reconstruction:
Link
Style Transfer:
Link
Move Mirror:
Link
Pose Estimation:
Link
YOLO Attack:
YouTube
## YouTube Videos from 2019 and 2020
How to find the margin expression for SVM?
Link
Why does the kernel trick work?
Link
Example (Quiz): Compute SVM classifier
Link
Example (Quiz): Kernel SVM for XOR operator
Link
Example (Quiz): Kernel matrix to feature vector
Link
Example (Quiz): Entropy computation
Link
Example (Quiz): Decision tree for implication operator
Link
Example (Quiz): Three nearest neighbors
Link
How to find the HOG features?
Link
How to count the number of trainable weights in a convolutional neural network (LeNet)?
Link
Example (Quiz): How to find the 2D convolution between two matrices?
Link
Example (Homework): How to find a discrete approximation of the Gaussian filter?
Link
# Keywords and Notations
## Support Vector Machine
SVM classifier: $\hat{y} = \operatorname{sign}(w^\top x + b)$.
Hard margin, original max-margin formulation: $\max_{w, b} \dfrac{2}{\|w\|_2}$ such that $w^\top x_i + b \geq 1$ if $y_i = 1$ and $w^\top x_i + b \leq -1$ if $y_i = -1$.
Hard margin, simplified formulation: $\min_{w, b} \dfrac{1}{2}\|w\|_2^2$ such that $y_i (w^\top x_i + b) \geq 1$ for all $i$.
Soft margin, original max-margin formulation: $\min_{w, b, \xi} \dfrac{1}{2}\|w\|_2^2 + C \sum_i \xi_i$ such that $y_i (w^\top x_i + b) \geq 1 - \xi_i$ and $\xi_i \geq 0$, where $\xi_i$ is the slack variable for instance $i$, $C$ is the regularization parameter.
Soft margin, simplified formulation: $\min_{w, b} \dfrac{1}{2}\|w\|_2^2 + C \sum_i \max\{0,\, 1 - y_i (w^\top x_i + b)\}$.
Subgradient descent formula: $w \leftarrow w - \alpha \left(w - C \sum_{i:\, y_i (w^\top x_i + b) < 1} y_i x_i\right)$, where $\alpha$ is the learning rate; a code sketch of this update appears below.
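The soft-margin objective and its subgradient update translate directly into code. Below is a minimal NumPy sketch, assuming labels in $\{-1, +1\}$; the function name `svm_subgradient_descent`, the learning rate `lr`, and the batch (rather than stochastic) update are illustrative choices, not part of the course notation.

```python
import numpy as np

def svm_subgradient_descent(X, y, C=1.0, lr=0.01, epochs=200):
    """Minimize (1/2)||w||^2 + C * sum_i max(0, 1 - y_i (w'x_i + b))
    by batch subgradient descent; labels y are assumed to be in {-1, +1}."""
    d = X.shape[1]
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        violated = margins < 1  # instances contributing a hinge-loss subgradient
        grad_w = w - C * (y[violated][:, None] * X[violated]).sum(axis=0)
        grad_b = -C * y[violated].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Tiny usage example on linearly separable data
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = svm_subgradient_descent(X, y)
print(np.sign(X @ w + b))  # should match y
```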
## Kernel Trick
Kernel SVM classifier: $\hat{y} = \operatorname{sign}\big(w^\top \varphi(x) + b\big)$, where $\varphi$ is the feature map.
Kernel Gram matrix: $K_{ij} = K(x_i, x_j) = \varphi(x_i)^\top \varphi(x_j)$.
Quadratic Kernel: $K(x, x') = (x^\top x' + 1)^2$ has feature representation $\varphi(x) = \big(1,\ \sqrt{2}\,x_1,\ \sqrt{2}\,x_2,\ x_1^2,\ x_2^2,\ \sqrt{2}\,x_1 x_2\big)$ for a two-dimensional input $x = (x_1, x_2)$.
Gaussian RBF Kernel: $K(x, x') = \exp\left(-\dfrac{\|x - x'\|_2^2}{2\sigma^2}\right)$ has an infinite-dimensional feature representation, where $\sigma^2$ is the variance parameter. A numerical check of these kernels appears below.
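As a sanity check on the kernel trick, the quadratic kernel evaluated directly should agree with the inner product of the explicit quadratic feature maps, and a Gram matrix is just the kernel applied to every pair of points. The sketch below assumes two-dimensional inputs; the helper names and the example points are made up for illustration.

```python
import numpy as np

def quadratic_kernel(x, z):
    # K(x, z) = (x'z + 1)^2
    return (x @ z + 1) ** 2

def quadratic_features(x):
    # Explicit feature map for a 2D input x = (x1, x2):
    # phi(x) = (1, sqrt(2) x1, sqrt(2) x2, x1^2, x2^2, sqrt(2) x1 x2)
    x1, x2 = x
    return np.array([1.0, np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1 ** 2, x2 ** 2, np.sqrt(2) * x1 * x2])

def rbf_kernel(x, z, sigma2=1.0):
    # K(x, z) = exp(-||x - z||^2 / (2 sigma^2))
    return np.exp(-np.sum((x - z) ** 2) / (2 * sigma2))

# Kernel trick check: K(x, z) equals phi(x)'phi(z) for the quadratic kernel
x, z = np.array([1.0, 2.0]), np.array([0.5, -1.0])
assert np.isclose(quadratic_kernel(x, z), quadratic_features(x) @ quadratic_features(z))

# Gram matrix for a small data set, built entry by entry
X = np.array([[1.0, 2.0], [0.5, -1.0], [3.0, 0.0]])
K = np.array([[rbf_kernel(a, b) for b in X] for a in X])
```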
## Information Theory
Entropy: $H(Y) = -\sum_{c=1}^{K} p_c \log_2 p_c$, where $K$ is the number of classes (number of possible labels), $p_c$ is the fraction of data points with label $c$.
Conditional entropy: $H(Y \mid X) = \sum_{j=1}^{m} p_j \left(-\sum_{c=1}^{K} p_{c \mid j} \log_2 p_{c \mid j}\right)$, where $m$ is the number of possible values of the feature, $p_j$ is the fraction of data points with feature value $j$, $p_{c \mid j}$ is the fraction of data points with label $c$ among the ones with feature value $j$.
Information gain, for feature $X_j$: $I(Y; X_j) = H(Y) - H(Y \mid X_j)$ (computed in the sketch below).
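These three quantities are only a few lines of NumPy each. A small sketch, with hypothetical helper names and a toy binary data set for illustration:

```python
import numpy as np

def entropy(labels):
    # H(Y) = -sum_c p_c log2 p_c over the observed label fractions
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def conditional_entropy(feature, labels):
    # H(Y | X) = sum_j p_j * H(Y | X = j)
    values, counts = np.unique(feature, return_counts=True)
    p = counts / counts.sum()
    return sum(pj * entropy(labels[feature == v]) for v, pj in zip(values, p))

def information_gain(feature, labels):
    # I(Y; X) = H(Y) - H(Y | X)
    return entropy(labels) - conditional_entropy(feature, labels)

# Toy example: one binary feature and binary labels
y = np.array([1, 1, 1, 0, 0, 0])
x = np.array([0, 0, 1, 1, 1, 1])
print(entropy(y), conditional_entropy(x, y), information_gain(x, y))
```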
## Decision Tree
Decision stump classifier: $\hat{y} = \mathbb{1}\{x_j \geq t_j\}$, where $t_j$ is the threshold for feature $j$.
Feature selection: $j^\star = \arg\max_j I(Y; X_j)$, i.e., the feature with the largest information gain (see the stump sketch below).
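A sketch of the feature-selection rule: try every feature and every observed threshold, and keep the split with the largest information gain. The exhaustive threshold search, the 0/1 prediction convention, and the function names are illustrative assumptions.

```python
import numpy as np

def entropy(y):
    # H = -sum_c p_c log2 p_c
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_stump(X, y):
    """Return the (feature j, threshold t) maximizing information gain;
    the stump then predicts 1 when x_j >= t and 0 otherwise."""
    best_j, best_t, best_gain = None, None, -np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            below, above = y[X[:, j] < t], y[X[:, j] >= t]
            if len(below) == 0 or len(above) == 0:
                continue  # split puts every point on one side, no information gained
            h_cond = (len(below) * entropy(below) + len(above) * entropy(above)) / len(y)
            gain = entropy(y) - h_cond
            if gain > best_gain:
                best_j, best_t, best_gain = j, t, gain
    # (a full decision tree would recurse on each side of the chosen split)
    return best_j, best_t
```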
## Convolution
Convolution (1D): $a = x * w$, $a_i = \sum_{t=-k}^{k} w_t\, x_{i-t}$, where $w$ is the filter, and $k$ is half of the width of the filter.
Convolution (2D): $A = X * W$, $A_{ij} = \sum_{s=-k}^{k} \sum_{t=-k}^{k} W_{st}\, X_{i-s,\, j-t}$, where $W$ is the filter, and $k$ is half of the width of the filter.
Sobel filter: $W_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}$ and $W_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}$.
Image gradient: $\nabla_x = W_x * X$, $\nabla_y = W_y * X$, with gradient magnitude $G = \sqrt{\nabla_x^2 + \nabla_y^2}$ and gradient direction $\Theta = \arctan\left(\dfrac{\nabla_y}{\nabla_x}\right)$; a NumPy implementation follows below.
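The 2D convolution formula and the Sobel gradients can be implemented directly, if inefficiently, with two loops. A minimal sketch, assuming zero padding for a 'same'-size output and a random test image:

```python
import numpy as np

def conv2d(X, W):
    """'Same'-size 2D convolution with zero padding:
    A[i, j] = sum_{s,t} W[s, t] X[i - s, j - t], filter size (2k+1) x (2k+1)."""
    k = W.shape[0] // 2
    padded = np.pad(X, k)
    A = np.zeros_like(X, dtype=float)
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            patch = padded[i:i + 2 * k + 1, j:j + 2 * k + 1]
            A[i, j] = np.sum(W[::-1, ::-1] * patch)  # flip: convolution, not correlation
    return A

# Sobel filters and the image gradient on a random test image
W_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
W_y = W_x.T
X = np.random.rand(8, 8)
grad_x, grad_y = conv2d(X, W_x), conv2d(X, W_y)
magnitude = np.sqrt(grad_x ** 2 + grad_y ** 2)
direction = np.arctan2(grad_y, grad_x)
```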
## Convolutional Neural Network
Fully connected layer: $a = g(w^\top x + b)$, where $a$ is the activation unit, $g$ is the activation function.
Convolution layer: $A = g(W * X + b)$, where $A$ is the activation map.
Pooling layer: (max-pooling) $a = \max\{x_1, \ldots, x_m\}$, (average-pooling) $a = \dfrac{1}{m} \sum_{i=1}^{m} x_i$, over the units $x_1, \ldots, x_m$ in the pooling window. Forward passes for all three layers are sketched below.
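The three layer types written as tiny NumPy forward passes. ReLU as the activation $g$, 'valid' (no-padding) convolution, and non-overlapping pooling windows are illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def fully_connected(x, W, b, g=relu):
    # a = g(W x + b)
    return g(W @ x + b)

def conv_layer(X, W, b, g=relu):
    # A = g(W * X + b) with a (2k+1) x (2k+1) filter and no padding ('valid')
    k = W.shape[0] // 2
    H, Wd = X.shape
    A = np.zeros((H - 2 * k, Wd - 2 * k))
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            A[i, j] = np.sum(W[::-1, ::-1] * X[i:i + 2 * k + 1, j:j + 2 * k + 1]) + b
    return g(A)

def max_pool(A, size=2):
    # Non-overlapping max-pooling over size x size windows
    H, W = A.shape
    A = A[:H - H % size, :W - W % size]  # drop rows/columns that do not fit a full window
    return A.reshape(H // size, size, W // size, size).max(axis=(1, 3))

# Usage: 6x6 input -> 4x4 activation map after a 3x3 filter -> 2x2 after pooling
X = np.random.rand(6, 6)
W = np.random.randn(3, 3)
print(max_pool(conv_layer(X, W, b=0.1)).shape)  # (2, 2)
```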