
Official Due Date: June 7

# Written (Math) Problems

📗 Enter your ID here and click the button to generate your questions.
📗 The same ID should generate the same set of questions. Your answers are not saved when you close the browser. You could print the page, solve the problems, then enter all your answers at the end.
📗 Some of the referenced past exams can be found on Professor Zhu's and Professor Dyer's websites: Link and Link.
📗 Please do not refresh the page: your answers will not be saved. You can save and load your answers (only fill-in-the-blank questions) using the buttons at the bottom of the page.
📗 Please report any bugs on Piazza.

# Warning: please enter your ID before you start!


# Question 1 [3 points]

📗 (Fall 2014 Midterm Q12, Fall 2013 Final Q4, Spring 2017 Final Q2) A linear SVM has \(w\) = and \(b\) = . Which of the following points is predicted positive (label 1)?
📗 Choices:





None of the above
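📗 A minimal sketch in Python (with made-up values, since the actual \(w\), \(b\), and candidate points are generated per student) of checking the prediction rule \(\hat{y} = 1_{\left\{w^\top x + b \geq 0\right\}}\):
```python
import numpy as np

# Hypothetical values: substitute the w, b, and candidate points generated for your ID.
w = np.array([1.0, -2.0])
b = 0.5
points = [np.array([3.0, 1.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]

for x in points:
    activation = w @ x + b               # w^T x + b
    label = 1 if activation >= 0 else 0  # predicted positive iff activation >= 0
    print(x, activation, "-> label", label)
```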

# Question 2 [2 points]

📗 (Fall 2014 Midterm Q14) Suppose an SVM has \(w\) = and \(b\) = . What is the actual distance between the two planes defined by \(w^\top x + b = -1\) and \(w^\top x + b = 1\)?
📗 Answer: .
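📗 The two planes are parallel, and their separation works out to \(\dfrac{2}{\lVert w \rVert}\). A quick numerical check with a made-up \(w\):
```python
import numpy as np

w = np.array([3.0, 4.0])         # hypothetical weights; use your generated w
width = 2.0 / np.linalg.norm(w)  # distance between w^T x + b = -1 and w^T x + b = 1
print(width)                     # 2 / 5 = 0.4 for this w
```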

# Question 3 [4 points]

📗 (Fall 2014 Midterm Q15, Fall 2013 Final Q7, Fall 2011 Midterm Q9) If \(K\left(x, x'\right)\) is a kernel with induced feature representation \(\varphi\left(x\right)\) = , and \(G\left(x, x'\right)\) is another kernel with induced feature representation \(\theta\left(x\right)\) = , then it is known that \(H\left(x, x'\right) = a K\left(x, x'\right) + b G\left(x, x'\right)\) with \(a\) = , \(b\) = is also a kernel. What is the induced feature representation of \(H\) for this \(x\)?
📗 Hint: Fall 2014 Midterm Q15 gives you the formula: basically summing two kernels is equivalent to concatenating the corresponding feature vectors, but please try to convince yourself this is correct (see the last quiz question for Lecture 5).
📗 Answer (comma separated vector): .
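📗 A numerical way to convince yourself, with hypothetical feature maps and coefficients: the concatenation \(\left(\sqrt{a} \varphi\left(x\right), \sqrt{b} \theta\left(x\right)\right)\) reproduces \(H\) as an inner product.
```python
import numpy as np

def phi(x):                 # hypothetical feature map for K
    return np.array([x[0], x[1], x[0] * x[1]])

def theta(x):               # hypothetical feature map for G
    return np.array([x[0] ** 2, x[1] ** 2])

a, b = 2.0, 3.0             # hypothetical nonnegative coefficients
x, xp = np.array([1.0, 2.0]), np.array([3.0, -1.0])

H = a * phi(x) @ phi(xp) + b * theta(x) @ theta(xp)

# candidate feature map for H: concatenate sqrt(a)*phi and sqrt(b)*theta
psi = lambda z: np.concatenate([np.sqrt(a) * phi(z), np.sqrt(b) * theta(z)])
print(H, psi(x) @ psi(xp))  # the two numbers agree
```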

# Question 4 [3 points]

📗 (Fall 2014 Midterm Q13, Fall 2012 Final Q7) Recall a linear SVM with slack variables has the objective function \(\dfrac{1}{2} w^\top w + C \displaystyle\sum_{i=1}^{n} \varepsilon_{i}\). What is the optimal \(w\) when the trade-off parameter \(C\) is 0? The training data contains only points with label 0 and with label 1. Only enter the weights, no bias.
📗 Answer (comma separated vector): .
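📗 To see why, note that setting \(C = 0\) removes the slack term, so the objective reduces to \(\dfrac{1}{2} w^\top w\) and no longer depends on the data. A toy check with made-up weight vectors:
```python
import numpy as np

# With C = 0 the slack term vanishes and the objective is just (1/2) w^T w,
# so it is minimized at w = 0 no matter what the training points are.
for w in [np.zeros(2), np.array([1.0, 0.0]), np.array([-2.0, 3.0])]:
    print(w, 0.5 * w @ w)  # only w = 0 attains the minimum value 0
```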

# Question 5 [2 points]

📗 (Fall 2010 Final Q14) Consider a small dataset with \(n\) points, where each point is in a dimensional space. For which values of \(n\) does there exist a dataset such that, no matter what binary labels we give to the points, a linear SVM can perfectly classify the resulting dataset?
📗 Hint: the largest such \(n\) is called the Vapnik-Chervonenkis (VC) dimension. The VC dimension for linear classifiers (for example, SVMs) is the dimension of the space plus 1. As an example, in 2D with 3 points, no matter what binary label we give to each point, a line can always separate the two classes; note that this is not the case with 4 points (remember the XOR example). A brute-force check is sketched after the choices below.

📗 Choices:





None of the above
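📗 A brute-force sketch of the shattering check, using scikit-learn's linear SVM with a very large \(C\) to approximate a hard margin (the point sets are made up):
```python
import numpy as np
from itertools import product
from sklearn.svm import SVC

def shatterable(X):
    """True if a linear classifier can realize every binary labeling of the rows of X."""
    for y in product([0, 1], repeat=len(X)):
        if len(set(y)) < 2:
            continue  # constant labelings are trivially separable
        clf = SVC(kernel="linear", C=1e6).fit(X, list(y))
        if clf.score(X, list(y)) < 1.0:
            return False
    return True

pts3 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
pts4 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(shatterable(pts3))  # True: 3 points in general position can be shattered in 2D
print(shatterable(pts4))  # False: the XOR labeling defeats every line, so VC dim = 3
```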

# Question 6 [2 points]

📗 (Fall 2011 Midterm Q7) Given a weight vector \(w\) = , consider the line defined by \(w^\top x\) = . Along this line, there is a point that is closest to the origin. How far is that point from the origin in Euclidean distance?
📗 Answer: .
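📗 The closest point is the projection of the origin onto the line, so the distance is \(\dfrac{|c|}{\lVert w \rVert}\), where \(c\) denotes the right-hand side. A sketch with made-up numbers:
```python
import numpy as np

w = np.array([3.0, 4.0])    # hypothetical weight vector
c = 10.0                    # hypothetical right-hand side of w^T x = c
# the closest point to the origin on the line is (c / ||w||^2) * w,
# so its distance from the origin is |c| / ||w||
closest = (c / (w @ w)) * w
print(np.linalg.norm(closest), abs(c) / np.linalg.norm(w))  # both 2.0 here
```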

# Question 7 [2 points]

📗 (Fall 2011 Midterm Q8, Fall 2009 Final Q1) Let \(w\) = and \(b\) = . For the point \(x\) = , \(y\) = , what is the smallest slack value \(\xi\) for it to satisfy the margin constraint?
📗 Answer: .
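📗 The margin constraint is \(y \left(w^\top x + b\right) \geq 1 - \xi\) with \(\xi \geq 0\), so the smallest feasible slack is \(\xi = \max\left\{0, 1 - y \left(w^\top x + b\right)\right\}\). A sketch with made-up numbers (labels in \(\left\{-1, +1\right\}\)):
```python
import numpy as np

w, b = np.array([1.0, 2.0]), -1.0   # hypothetical w and b
x, y = np.array([0.5, 0.0]), 1      # hypothetical point and label in {-1, +1}
# margin constraint: y * (w^T x + b) >= 1 - xi, with xi >= 0
xi = max(0.0, 1.0 - y * (w @ x + b))
print(xi)                           # 1 - 1 * (0.5 - 1) = 1.5 here
```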

# Question 8 [6 points]

📗 (Fall 2019 Final Q10) A linear SVM with weights \(w_{1}, w_{2}, b\) is trained on the following data set: \(x_{1}\) = , \(y_{1}\) = and \(x_{2}\) = , \(y_{2}\) = . The attributes are two dimensional \(\left(x_{1}, x_{2}\right)\) and the label \(y\) is binary. The classification rule is \(\hat{y} = 1_{\left\{w_{1} x_{1} + w_{2} x_{2} + b \geq 0\right\}}\). Assuming \(b\) = , what is \(\left(w_{1}, w_{2}\right)\)?
📗 Hint: draw the line then figure out its equation using one point on the line and its slope.


📗 Answer (comma separated vector): .
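📗 With only two training points, the maximum-margin boundary is the perpendicular bisector of the segment joining them: \(w\) is parallel to the difference of the points, scaled so the boundary passes through the midpoint with the given \(b\). A sketch with made-up points and bias:
```python
import numpy as np

x_pos = np.array([2.0, 3.0])  # hypothetical point with label 1
x_neg = np.array([0.0, 1.0])  # hypothetical point with label 0
b = -1.0                      # hypothetical given bias

d = x_pos - x_neg             # w points from the negative to the positive point
m = (x_pos + x_neg) / 2.0     # the boundary passes through the midpoint
t = -b / (d @ m)              # scale w = t * d so that w^T m + b = 0 (assumes d @ m != 0)
w = t * d
print(w, w @ x_pos + b, w @ x_neg + b)  # positive for x_pos, negative for x_neg
```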

# Question 9 [2 points]

📗 This is a survey question. What is the highest numbered programming course you took (CS300, CS320, CS400, etc.), and do you think that course prepared you for P1? If not, please explain.
📗 Answer: .

# Question 10 [1 point]

📗 Please enter any comments and suggestions, including possible mistakes and bugs with the questions and the auto-grading, and any materials relevant to solving the questions that you think are not covered well during the lectures. If you have no comments, please enter "None": do not leave it blank.
📗 Answer: .

# Grade


 ***** ***** ***** ***** ***** 

 ***** ***** ***** ***** ***** 

📗 Please copy and paste the text between the *****s (not including the *****s) and submit it on Canvas, M4.
📗 You could save the text as a text file using the button, or just copy and paste it into a text file.
📗 Warning: the load button does not function properly for all questions; please recheck everything after you load. You could load your answers from the text field using the button.

Last Updated: November 09, 2021 at 12:30 AM