
# M12 Past Exam Problems

📗 Enter your ID (the wisc email ID without @wisc.edu) in the box, then click the button (or hit the enter key).
📗 If the questions are not generated correctly, try refreshing the page using the button at the top left corner.
📗 The same ID should generate the same set of questions. Your answers are not saved when you close the browser. You could print the page, solve the problems, and then enter all your answers at the end.
📗 Please do not refresh the page: your answers will not be saved.

# Warning: please enter your ID before you start!



# Question 1

📗 [3 points] A hard margin support vector machine (SVM) is trained on the following dataset. Suppose we restrict \(b\) = ; what is the value of \(w\)? Enter a single number (i.e., do not include \(b\)). Assume the SVM classifier is \(1_{\left\{w x + b \geq 0\right\}}\).
\(x_{i}\)
\(y_{i}\)

📗 Answer: .
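
A minimal sketch of how this type of question could be solved numerically, assuming a toy 1D dataset, labels in \(\{0, 1\}\), and a fixed \(b\); all values below are placeholders, since the actual numbers are generated per ID:

```python
import numpy as np

# Placeholder data: the actual values are generated per student ID.
x = np.array([-1.0, 1.0])   # 1D features
y = np.array([0, 1])        # labels in {0, 1}
b = 0.0                     # the bias fixed by the question

s = 2 * y - 1               # map labels {0, 1} -> {-1, +1}
lo, hi = -np.inf, np.inf    # feasible interval for w
for xi, si in zip(x, s):
    a = si * xi             # margin constraint: a * w >= 1 - si * b
    rhs = 1 - si * b
    if a > 0:
        lo = max(lo, rhs / a)
    elif a < 0:
        hi = min(hi, rhs / a)
    elif rhs > 0:
        raise ValueError("infeasible: no w satisfies the margin constraints")
# the hard margin SVM minimizes w^2, so pick the feasible w closest to 0
w = lo if lo > 0 else (hi if hi < 0 else 0.0)
print(w)  # 1.0 for this toy data
```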

# Question 2

📗 [2 points] Given a weight vector \(w\) = , consider the line (plane) defined by \(w^\top x = c\) = . Along this line (on the plane), there is a point that is closest to the origin. How far is that point from the origin in Euclidean distance?

📗 Note: the distance between the point and plane is the length of the red line in the diagram, the length of the blue line is \(\dfrac{c}{w_{z}}\), not the distance between the point and plane.
📗 Answer: .
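
The closest point on the plane \(w^\top x = c\) to the origin lies along the direction of \(w\), at distance \(\dfrac{\left|c\right|}{\left\|w\right\|}\). A quick numeric check with placeholder values:

```python
import numpy as np

# Placeholder values; the actual w and c are generated per student ID.
w = np.array([3.0, 4.0])
c = 10.0

distance = abs(c) / np.linalg.norm(w)  # |c| / ||w||
print(distance)  # 10 / 5 = 2.0
```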

# Question 3

📗 [3 points] Suppose the only two support vectors in a data set are with label and with label . What is the margin of a hard-margin SVM (support vector machine) trained on this data set?
📗 Answer: .
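
When the data set has exactly two support vectors, \(w\) is parallel to the segment connecting them, so the margin (the distance between the plus and minus planes) is just the distance between the two points. A sketch with placeholder points:

```python
import numpy as np

# Placeholder support vectors; the actual points are generated per ID.
x_plus = np.array([1.0, 2.0])    # the support vector with label 1
x_minus = np.array([-1.0, 0.0])  # the support vector with label 0

margin = np.linalg.norm(x_plus - x_minus)  # distance between the two points
print(margin)  # sqrt(8) ≈ 2.83
```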

# Question 4

📗 [4 points] Consider a linear model \(a_{i} = w^\top x_{i} + b\) with the hinge cost function . The initial weight is \(\begin{bmatrix} w \\ b \end{bmatrix}\) = . What is the updated weight and bias after one stochastic (sub)gradient descent step if the chosen training data point is \(x\) = , \(y\) = ? The learning rate is .
📗 Answer (comma separated vector): .
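
A sketch of one stochastic subgradient step, assuming the hinge cost \(\max\left\{0, 1 - \left(2 y - 1\right)\left(w^\top x + b\right)\right\}\) (the convention used elsewhere on this page) and placeholder values:

```python
import numpy as np

# Placeholder values; the actual numbers are generated per student ID.
w = np.array([1.0])   # initial weight
b = -1.0              # initial bias
x = np.array([-2.0])  # the chosen training point
y = 1                 # its label, in {0, 1}
lr = 0.1              # learning rate

s = 2 * y - 1  # map the label to {-1, +1}
# one subgradient of max{0, 1 - s (w^T x + b)}:
if 1 - s * (w @ x + b) > 0:
    grad_w, grad_b = -s * x, -s
else:
    grad_w, grad_b = np.zeros_like(w), 0.0
w, b = w - lr * grad_w, b - lr * grad_b
print(w, b)  # [0.8] -0.9 for these toy values
```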

# Question 5

📗 [4 points] Given the number of instances in each class summarized in the following table, how many instances are used to train a one-vs-one SVM (Support Vector Machine) for class vs ?

| \(y_{i}\) | 0 | 1 | 2 | 3 | 4 |
|---|---|---|---|---|---|
| Count |  |  |  |  |  |

📗 Answer: .
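
A one-vs-one classifier is trained only on the instances of the two classes involved, so the answer is the sum of the two class counts. A toy check with placeholder counts:

```python
# Placeholder counts; the actual table entries are generated per ID.
counts = {0: 10, 1: 20, 2: 30, 3: 40, 4: 50}
a, b = 1, 3  # the pair of classes in the question

n_train = counts[a] + counts[b]  # only these two classes are used
print(n_train)  # 20 + 40 = 60
```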

# Question 6

📗 [4 points] Given a linear SVM (Support Vector Machine) that perfectly classifies a set of training data containing positive examples and negative examples, what is the maximum possible number of training examples that could be removed while still producing the exact same SVM as the one derived from the original training set?
📗 Answer: .

# Question 7

📗 [4 points] Given a linear SVM (Support Vector Machine) that perfectly classifies a set of training data containing positive examples and negative examples, what is the minimum possible number of training examples that need to be removed to cause the margin of a linear SVM to increase? If this is impossible, enter "-1".
📗 Answer: .

# Question 8

📗 [4 points] Given a linear SVM (Support Vector Machine) that perfectly classifies a set of training data containing positive examples and negative examples with 2 support vectors. After adding one more positively labeled training example and retraining the SVM, what is the maximum possible number of support vectors in the new SVM?
📗 Answer: .

# Question 9

📗 [4 points] Given a linear SVM (Support Vector Machine) that perfectly classifies a set of training data containing positive examples and negative examples with 2 support vectors. After adding one more positively labeled training example and retraining the SVM, what is the maximum possible number of support vectors in the new SVM?
📗 Answer: .

# Question 10

📗 [3 points] Suppose a soft margin support vector machine is trained on two points, \(x_{1}\) = , \(y_{1}\) = and \(x_{2}\) = , \(y_{2}\) = . Given the regularization parameter \(\lambda\) = , what is the soft margin loss at \(w\) = and \(b\) = ? Use \(C = \dfrac{\lambda}{2} w^\top w + \dfrac{1}{n} \displaystyle\sum_{i=1}^{n} \displaystyle\max\left\{0, 1 - \left(2 y_{i} - 1\right)\left(w^\top x_{i} + b\right)\right\}\).
📗 Answer: .
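
A direct evaluation of the loss \(C\) above, assuming placeholder values for the ID-generated numbers:

```python
import numpy as np

# Placeholder values; the actual numbers are generated per student ID.
X = np.array([[1.0, 0.0], [0.0, 1.0]])  # x_1 and x_2 as rows
y = np.array([1, 0])                    # labels in {0, 1}
w = np.array([1.0, -1.0])
b = 0.0
lam = 2.0                               # regularization parameter lambda

s = 2 * y - 1                                   # map labels to {-1, +1}
hinge = np.maximum(0.0, 1.0 - s * (X @ w + b))  # per-example hinge terms
C = 0.5 * lam * (w @ w) + hinge.mean()          # soft margin loss
print(C)  # 0.5 * 2 * 2 + 0 = 2.0
```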

# Question 11

📗 [2 points] What are the smallest and largest values of the subderivatives of at \(x = 0\)?
📗 Answer (comma separated vector): .

# Question 12

📗 [4 points] Given the following training set, add one instance \(\begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix}\) with \(y\) = so that all instances are support vectors for the Hard Margin SVM (Support Vector Machine) trained on the new training set.

| \(x_{1}\) | \(x_{2}\) | \(y\) |
|---|---|---|
|  |  | 0 |
|  |  | 0 |
|  |  | 0 |
|  |  | 1 |
|  |  | 1 |
|  |  | 1 |

📗 Answer (comma separated vector): .

# Question 13

📗 [4 points] Given the following training set, add one instance \(\begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix}\) with \(y\) = so that all instances are support vectors for the Hard Margin SVM (Support Vector Machine) trained on the new training set.

| \(x_{1}\) | \(x_{2}\) | \(y\) |
|---|---|---|
|  |  | 0 |
|  |  | 0 |
|  |  | 0 |
|  |  | 1 |
|  |  | 1 |
|  |  | 1 |


📗 Note: in the diagram, the two support vectors are currently connected by the grey line, and the black line represents the SVM classification boundary. After adding one point, you should be able to make all seven points support vectors, with the classification boundary given by the green line.
📗 Answer (comma separated vector): .

# Question 14

📗 [2 points] Suppose an SVM (Support Vector Machine) has \(w\) = and \(b\) = . What is the actual distance between the two planes defined by \(w^\top x + b = -2\) and \(w^\top x + b = 2\)? Note that these are not the SVM plus and minus planes.
📗 Answer: .
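
Two parallel planes \(w^\top x + b = c_{1}\) and \(w^\top x + b = c_{2}\) are \(\dfrac{\left|c_{2} - c_{1}\right|}{\left\|w\right\|}\) apart (the bias cancels). A quick check with a placeholder \(w\):

```python
import numpy as np

# Placeholder w; the actual vector is generated per student ID.
w = np.array([3.0, 4.0])
c1, c2 = -2.0, 2.0  # the two offsets from the question

distance = abs(c2 - c1) / np.linalg.norm(w)  # |c2 - c1| / ||w||
print(distance)  # 4 / 5 = 0.8
```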

# Question 15

📗 [2 points] Suppose an SVM (Support Vector Machine) has \(w\) = and \(b\) = . What is the actual distance between the two planes defined by \(w^\top x + b = -1\) and \(w^\top x + b = 1\)?
📗 Answer: .

# Question 16

📗 [2 points] Suppose an SVM (Support Vector Machine) has \(w\) = and \(b\) = . What is the actual distance between the two planes defined by \(w^\top x + b = -1\) and \(w^\top x + b = 1\)?

📗 Note: the distance between the two planes is the length of the red line in the diagram; the blue line does not represent the distance between the planes. You may have to rotate the diagram to see it.
📗 Answer: .

# Question 17

📗 [3 points] A linear SVM (Support Vector Machine) has \(w\) = and \(b\) = . Which of the following points is predicted positive (label 1)?
📗 Choices:





None of the above
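
Each candidate point can be checked directly against the classification rule \(1_{\left\{w^\top x + b \geq 0\right\}}\); a sketch with placeholder values:

```python
import numpy as np

# Placeholder values; w, b, and the choices are generated per student ID.
w = np.array([1.0, -2.0])
b = 0.5
points = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

for x in points:
    print(x, int(w @ x + b >= 0))  # 1 means predicted positive
```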

# Question 18

📗 [1 point] Move the plus (blue) and minus (red) planes so that they separate the two classes and the margin is maximized.



# Question 19

📗 [4 points] Consider the linear SVM (Support Vector Machine) problem without slack variables or kernels: this is known as the hard margin SVM. If you give it a linearly separable training data set where \(\begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} \in \mathbb{R}^{2}\) and \(y \in \left\{0, 1\right\}\), it will learn a line in \(\mathbb{R}^{2}\). Tom did something to your data set, and the hard margin SVM no longer works (the modified data set is no longer linearly separable): \(\begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} \leftarrow \begin{bmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{bmatrix} \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} + \begin{bmatrix} b_{1} \\ b_{2} \end{bmatrix} = M \begin{bmatrix} x_{1} \\ x_{2} \end{bmatrix} + b\). Suppose \(b\) = ; give an example of such an \(M\).

📗 Note: you can test your transformation in the diagram: the original points are on the left and the new points after the transformation are on the right.
📗 Answer (matrix with multiple lines, each line is a comma separated vector): .
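
The key observation is that an invertible affine map preserves linear separability, so \(M\) must be singular (rank-deficient): it collapses \(\mathbb{R}^{2}\) onto a line or a point, which can force points with different labels to coincide. A sketch with placeholder points:

```python
import numpy as np

# Placeholder points and b; the actual values are generated per ID.
X = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
b = np.array([0.0, 0.0])

M = np.array([[1.0, 1.0],
              [1.0, 1.0]])  # rank 1: collapses the plane onto a line
X_new = X @ M.T + b
print(X_new)  # (2,0) and (0,2) both map to (2,2); if their labels
              # differ, no hyperplane can separate the new data set
```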

# Question 20

📗 [6 points] A linear SVM (Support Vector Machine) with weights \(w_{1}, w_{2}, b\) is trained on the following data set: \(x_{1}\) = , \(y_{1}\) = and \(x_{2}\) = , \(y_{2}\) = . The attributes (i.e. features) are two dimensional \(\left(x_{i1}, x_{i2}\right)\) and the label \(y_{i}\) is binary. The classification rule is \(\hat{y}_{i} = 1_{\left\{w_{1} x_{i1} + w_{2} x_{i2} + b \geq 0\right\}}\). Assuming \(b\) = , what is \(\left(w_{1}, w_{2}\right)\)?
📗 Answer (comma separated vector): .
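
With only two training points, both are support vectors and lie exactly on the plus and minus planes, giving the system \(w^\top x_{1} + b = 1\), \(w^\top x_{2} + b = -1\) (for \(y_{1} = 1\), \(y_{2} = 0\)). A sketch solving it with placeholder values, assuming the given \(b\) is consistent with the SVM solution:

```python
import numpy as np

# Placeholder values; the actual points, labels, and b are generated per ID.
x1 = np.array([2.0, 0.0])  # labeled 1, so it sits on the plus plane
x2 = np.array([0.0, 2.0])  # labeled 0, so it sits on the minus plane
b = 0.0

A = np.vstack([x1, x2])             # two equations in (w1, w2)
rhs = np.array([1.0 - b, -1.0 - b])
w = np.linalg.solve(A, rhs)
print(w)  # [0.5, -0.5]
```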

# Question 21

📗 [6 points] A linear SVM (Support Vector Machine) with weights \(w_{1}, w_{2}, b\) is trained on the following data set: \(x_{1}\) = , \(y_{1}\) = and \(x_{2}\) = , \(y_{2}\) = . The attributes (i.e. features) are two dimensional \(\left(x_{i1}, x_{i2}\right)\) and the label \(y_{i}\) is binary. The classification rule is \(\hat{y}_{i} = 1_{\left\{w_{1} x_{i1} + w_{2} x_{i2} + b \geq 0\right\}}\). Assuming \(b\) = , what is \(\left(w_{1}, w_{2}\right)\)? The drawing is not graded.


📗 Answer (comma separated vector): .

# Question 22

📗 [4 points] Suppose the only three support vectors in a data set are with label , with label , and \(x\) with label . Let the margin (the distance between the plus and minus planes) be . What is \(x\)? If there are multiple possible values, enter one of them; if there are none, enter \(-1, -1\).
📗 Answer (comma separated vector): .

# Question 23

📗 [2 points] Consider a small dataset with \(n\) points, where each point is in a dimensional space. For which values of \(n\) does there exist a dataset such that, no matter what binary labels we give to the points, a linear SVM (Support Vector Machine) can perfectly classify the resulting dataset?
📗 Choices:





None of the above

# Question 24

📗 [1 point] Blank.
📗 Answer: .

# Question 25

📗 [1 point] Blank.
📗 Answer: .

# Grade




📗 You can save the text in the box above to a file using the button, or copy and paste it into a file yourself.
📗 You can load your answers from the text (or a txt file) in the text box below using the button. The first two lines should be "##m: 12" and "##id: your id", and the format of the remaining lines should be "##1: your answer to question 1", then on a new line "##2: your answer to question 2", etc. Please make sure that your answers are loaded correctly before submitting them.


📗 You can find videos going through the questions on Link.





Last Updated: July 03, 2024 at 12:23 PM