📗 The lecture is in person, but you can join on Zoom: 8:50-9:40 or 11:00-11:50. Zoom recordings can be viewed on Canvas -> Zoom -> Cloud Recordings. They will be moved to Kaltura over the weekend.
📗 The in-class (participation) quizzes should be submitted on TopHat (Code: 741565), but you can also submit your answers through the Form at the end of each lecture.
📗 The Python notebooks used during the lectures can also be found on GitHub. They will be updated weekly.
(1) Question 10: dxy > 0.8 will highlight 2 pixels (1, 1); dxy > 1 will highlight 0 pixels, since no pixel is strictly larger than 1; dxy > 0.7 will highlight 3 pixels (0.75, 1, 1).
(2) Question 14: the dual problem should have constraints A' @ y >= c, not A' @ y <= c; it is a typo, but it should not affect the answer.
(3) Question 16: probability of transitioning from 0 to 2 is 0, so it is impossible to observe a sequence [0, 0, 2], meaning the probability is 0.
(4) Question 17: for multi-class classification, lr.predict_proba(x) returns the probability that y belongs to each class. Here, the probability that x belongs to class 0 is 0.3, to class 1 is 0.5, and to class 2 is 0.2, so lr.predict(x) returns the class index with the highest probability, i.e. class 1. If you selected the answer 2 and noted in Question 20 that you assumed the classes are 1, 2, 3 instead of 0, 1, 2, you will get the point back.
(5) Question 19: u1 is principal component 1, u2 is principal component 2, and u3 is principal component 3, and they should not be reordered. This means the reconstruction of x is y1 u1 + y2 u2 + y3 u3 = -1 [0, 0, 1] + 0 [1, 0, 0] + 1 [0, 1, 0] = [0, 1, -1].
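The arithmetic in corrections (4) and (5) can be checked numerically; a minimal sketch using the probabilities and components quoted above:

```python
import numpy as np

# Correction (4): predict returns the index of the largest predicted probability.
proba = np.array([0.3, 0.5, 0.2])  # probabilities for classes 0, 1, 2
predicted_class = int(np.argmax(proba))  # -> 1

# Correction (5): reconstruction from principal components, kept in their
# original order (u1, u2, u3 are NOT reordered).
u1, u2, u3 = np.array([0, 0, 1]), np.array([1, 0, 0]), np.array([0, 1, 0])
y1, y2, y3 = -1, 0, 1
x = y1 * u1 + y2 * u2 + y3 * u3  # -> [0, 1, -1]
```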
📗 Exam Coverage:
➩ 20 multiple choice questions (four choices, only one of them is the "most correct").
Ten of the questions will not be similar to past exam questions or quiz questions, including ones on the new topics covered this semester:
➩ Preprocessing: text preprocessing, image preprocessing
Question 1:
➩ Classification: support vector machines, neural network
Question 2:
Question 3:
➩ Optimization: linear programming
Question 4:
Question 5:
➩ Simulation: all lectures
Question 6:
Question 7:
➩ There are no more new questions on these topics other than these seven and the ones in the weekly quizzes.
📗 Not on exam:
The following topics are NOT on Exam 3:
➩ Regex (already covered in Exam 2)
➩ HOG Features
➩ Classification methods other than logistic regression, support vector machines, and neural networks
➩ Regression methods other than linear regression
➩ LU decomposition
➩ Statistical inference (t-statistics, p-values)
➩ Non-linear optimization methods other than gradient descent
➩ Numerical gradient and Hessian
➩ Rand index and adjusted Rand index
➩ Non-linear PCA
➩ Column space and row space (not covered this semester)
➩ Tensor operations, including broadcasting (not covered this semester)
➩ Process and thread parallelism (not covered this semester)
📗 [1 point] Suppose dxy = skimage.filters.sobel(img) produces the dxy matrix in the following table. To highlight the edge pixels of the original image in green, image[dxy > t] = [0, 255, 0] is used, and pixels are highlighted. What value of t is used?
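The exam values are randomized, but the mechanics are the same as in Question 10 of the correction above; a minimal sketch with a hypothetical gradient matrix (the values 0.2, 0.75, 1, 1 are assumed for illustration):

```python
import numpy as np

# Hypothetical gradient magnitudes standing in for skimage.filters.sobel(img).
dxy = np.array([[0.2, 0.75],
                [1.0, 1.0]])
for t in [0.7, 0.8, 1.0]:
    mask = dxy > t             # strict comparison, so 1 > 1 is False
    print(t, int(mask.sum()))  # 0.7 -> 3, 0.8 -> 2, 1.0 -> 0
# image[dxy > t] = [0, 255, 0] would paint exactly mask.sum() pixels green.
```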
📗 [1 point] One-vs-one support vector machines are trained and produce the following confusion matrix. How many training items are used in training the "0 vs 2" support vector machine?
Count      Predict 0   Predict 1   Predict 2
Class 0
Class 1
Class 2
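The "0 vs 2" one-vs-one SVM is trained on every item whose true label is 0 or 2, which is the sum of the Class 0 and Class 2 rows of the confusion matrix; a minimal sketch with hypothetical counts (the exam values are randomized):

```python
import numpy as np

# Hypothetical confusion matrix: rows = true class, columns = predicted class.
confusion = np.array([[10, 2, 3],   # Class 0
                      [ 1, 8, 4],   # Class 1
                      [ 2, 5, 9]])  # Class 2
# The "0 vs 2" SVM uses all items with true class 0 or 2 (row sums),
# regardless of what they were predicted as.
n_items = confusion[0].sum() + confusion[2].sum()
print(int(n_items))  # -> 31 for these hypothetical counts
```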
📗 [1 point] The 3-fold cross validation accuracy for four different neural networks is summarized below. Which model is the most preferred one based on cross validation accuracy?
Network    Fold 1 accuracy   Fold 2 accuracy   Fold 3 accuracy
A
B
C
D
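The preferred model is the one with the highest accuracy averaged over the folds; a minimal sketch with hypothetical fold accuracies (the exam values are randomized):

```python
import numpy as np

# Hypothetical 3-fold accuracies for networks A-D.
accuracy = {"A": [0.80, 0.70, 0.90],
            "B": [0.85, 0.85, 0.85],
            "C": [0.60, 0.95, 0.70],
            "D": [0.75, 0.80, 0.85]}
# Cross validation accuracy is the mean over the folds; pick the largest.
best = max(accuracy, key=lambda k: np.mean(accuracy[k]))
print(best)  # -> "B" (mean 0.85, versus 0.80, 0.75, 0.80 for the others)
```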
📗 [1 point] What is the optimal solution [x1, x2] to the linear program max c * x1 + x2 subject to x1 + x2 <= 1, x1 >= 0, and x2 >= 0, where c = ?
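A linear objective attains its maximum at a vertex of the feasible region, here the triangle with vertices (0, 0), (1, 0), (0, 1); a minimal sketch assuming c = 2 (the exam value is randomized):

```python
# Enumerate the vertices of the feasible triangle and evaluate c*x1 + x2.
c = 2  # assumed value; the exam randomizes c
vertices = [(0, 0), (1, 0), (0, 1)]
best = max(vertices, key=lambda v: c * v[0] + v[1])
print(best)  # (1, 0) whenever c > 1; (0, 1) whenever c < 1
```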
📗 [1 point] Suppose the standard form of a linear program max c @ x subject to A @ x <= b and x >= 0 has len(c) = , A.shape = (, ), and len(b) = . What is the number of dual variables len(y)? Note: the dual problem is min b @ y subject to A' @ y >= c and y >= 0 where ' means transpose.
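There is one dual variable per primal constraint, so len(y) = len(b), the number of rows of A; a minimal sketch with hypothetical shapes (the exam values are randomized):

```python
import numpy as np

# Hypothetical shapes: n primal variables, m primal constraints.
n, m = 4, 3
c = np.ones(n)       # len(c) = n
A = np.ones((m, n))  # A.shape = (m, n)
b = np.ones(m)       # len(b) = m
# Each of the m rows of A @ x <= b gets one dual variable y_i.
len_y = len(b)
print(len_y)  # -> 3
```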
📗 [1 point] Suppose all the random vectors generated from a multivariate normal distribution, using numpy.random.multivariate_normal([0, 0], [[a, c], [c, b]], 1000) with a = and b = , are on the same line. What is the value of c?
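All the samples fall on one line exactly when the 2x2 covariance matrix is singular, i.e. its determinant a*b - c**2 is 0, so c = sqrt(a*b) (or -sqrt(a*b)); a minimal sketch assuming a = 4 and b = 9 (the exam values are randomized):

```python
import numpy as np

a, b = 4, 9          # assumed values; the exam randomizes a and b
c = (a * b) ** 0.5   # -> 6.0, making the covariance matrix rank 1
cov = np.array([[a, c], [c, b]])
print(np.linalg.det(cov))  # -> 0.0: degenerate covariance, samples lie on a line
```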
📗 [1 point] Consider a Markov chain with the following transition matrix with three states {0, 1, 2}. What is the probability a sequence , , is observed (given it starts with )?
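Given the starting state, the probability of a sequence is the product of the transition probabilities along the path (this is the same logic as correction (3) above); a minimal sketch with a hypothetical transition matrix (the exam values are randomized):

```python
import numpy as np

# Hypothetical transition matrix: entry [i, j] is the probability of
# moving from state i to state j.
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.1, 0.4, 0.5]])
# Probability of observing 0, 0, 2 given the chain starts at 0:
prob = P[0, 0] * P[0, 2]
print(prob)  # -> 0.0, since moving from 0 to 2 is impossible here (P[0, 2] = 0)
```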