Official Due Date: May 24

# Written (Math) Problems

📗 Enter your ID (your wisc email ID, without @wisc.edu) here: and click the button to generate your questions.
📗 The same ID should generate the same set of questions. Your answers are not saved when you close the browser. You can print the page: , solve the problems, and then enter all your answers at the end.
📗 Some of the referenced past exams can be found on Professor Zhu's and Professor Dyer's websites: Link and Link.
📗 Please do not refresh the page: your answers will not be saved. You can save and load your answers (fill-in-the-blank questions only) using the buttons at the bottom of the page.
📗 Please report any bugs on Piazza.

# Warning: please enter your ID before you start!


# Question 1 [3]

📗 (Fall 2017 Final Q7, Fall 2014 Midterm Q17, Fall 2013 Final Q10) We use gradient descent to find the minimum of the function \(f\left(x\right)\) = with step size \(\eta > 0\). If we start from the point \(x_{0}\) = , how small should \(\eta\) be so that we make progress in each iteration? Check all values of \(\eta\) that make progress.
📗 Hint: the minimum is 0, so "making progress" means getting closer to 0 in at least the first iteration; see the sketch after the choices.
📗 Choices:

None of the above
📗 Calculator: .
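📗 A minimal sketch of the progress check, assuming a hypothetical \(f\left(x\right) = x^{2}\), starting point \(x_{0} = 1\), and candidate step sizes (your generated values will differ): one gradient step is \(x_{1} = x_{0} - \eta f'\left(x_{0}\right)\), and a step size makes progress if \(\left| x_{1} \right| < \left| x_{0} \right|\).

```python
# Sketch only: f(x) = x**2, x0 = 1, and the candidate step sizes are
# hypothetical placeholders; substitute the values generated for your ID.

def f_prime(x):
    return 2 * x  # derivative of the placeholder f(x) = x**2

x0 = 1.0
for eta in [0.1, 0.5, 1.0, 1.5]:     # hypothetical candidate choices
    x1 = x0 - eta * f_prime(x0)      # one gradient descent step
    print(f"eta = {eta}: x1 = {x1}, makes progress = {abs(x1) < abs(x0)}")
```

For this placeholder function, \(x_{1} = \left(1 - 2 \eta\right) x_{0}\), so any \(0 < \eta < 1\) makes progress.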

# Question 2 [3]

📗 (Fall 2017 Final Q15, Fall 2010 Final Q5) Let \(x = \left(x_{1}, x_{2}, x_{3}\right)\). We want to minimize the objective function \(f\left(x\right)\) = using gradient descent. Let the step size be \(\eta\) = . If we start at the vector \(x^{\left(0\right)}\) = , what is the next vector \(x^{\left(1\right)}\) produced by gradient descent?
📗 Answer (comma separated vector): .
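📗 A minimal sketch of one vector gradient descent step \(x^{\left(1\right)} = x^{\left(0\right)} - \eta \nabla f\left(x^{\left(0\right)}\right)\), assuming a hypothetical objective \(f\left(x\right) = x_{1}^{2} + x_{2}^{2} + x_{3}^{2}\) and placeholder values:

```python
# Sketch only: the objective, step size, and starting vector below are
# hypothetical placeholders for the values generated for your ID.

def grad_f(x):
    # gradient of the placeholder f(x) = x1^2 + x2^2 + x3^2
    return [2 * xi for xi in x]

eta = 0.1
x0 = [1.0, 2.0, 3.0]
x1 = [xi - eta * gi for xi, gi in zip(x0, grad_f(x0))]
print(x1)  # [0.8, 1.6, 2.4] for these placeholders
```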

# Question 3 [2]

📗 (Fall 2017 Final Q23) Consider a rectified linear unit (ReLU) with input \(x\) and a bias term. The output can be written as \(y\) = . Here, the weight is and the bias is . Write down the input value \(x\) that produces the output \(y\) = .
📗 Answer: .
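📗 For a target output \(y > 0\), the unit is in its linear region, so \(w x + b = y\) gives \(x = \left(y - b\right) / w\). A minimal sketch with placeholder weight, bias, and target:

```python
# Sketch only: w, b, y are hypothetical placeholders; substitute the
# values generated for your ID.
w, b, y = 2.0, -1.0, 3.0
assert y > 0     # for y > 0, max(0, w*x + b) = w*x + b is invertible
x = (y - b) / w  # solve w*x + b = y
print(x)         # 2.0 for these placeholders
```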

# Question 4 [4]

📗 (Spring 2017 Final Q3, Spring 2018 Final Q7) Consider a Linear Threshold Unit (LTU) perceptron with initial weights and bias . Given a new input and label , and a learning rate of , compute the updated weights :
📗 Answer (comma separated vector): .
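📗 A minimal sketch of one LTU update, assuming the rule \(w \leftarrow w - \alpha \left(a - y\right) x\) (the same form as the hint in Question 5) and a threshold-at-zero activation; all numbers are placeholders, and the threshold convention should be checked against the lectures:

```python
# Sketch only: the weights, input, label, and learning rate are
# hypothetical placeholders for the values generated for your ID.
alpha = 0.5
w0, w = -0.5, [1.0, -1.0]  # bias and weights
x, y = [1.0, 1.0], 1       # new input and its label

z = w0 + sum(wi * xi for wi, xi in zip(w, x))
a = 1 if z >= 0 else 0     # LTU activation (assumed threshold at 0)

# update: w_i <- w_i - alpha * (a - y) * x_i; the bias sees a constant input 1
w0_new = w0 - alpha * (a - y)
w_new = [wi - alpha * (a - y) * xi for wi, xi in zip(w, x)]
print([w0_new] + w_new)    # bias first, then the weights on the inputs
```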

# Question 5 [3]

📗 (Fall 2016 Final Q15, Fall 2011 Midterm Q11) Let \(f\left(z\right) = \dfrac{1}{1 + \exp\left(-z\right)}\), \(z = w^\top x = w_{1} x_{1} + w_{2} x_{2} + ... + w_{d} x_{d}\), \(d\) = , be a sigmoid perceptron with inputs \(x_{1} = ... = x_{d}\) = and weights \(w_{1} = ... = w_{d}\) = . There is no constant bias input of one. If the desired output is \(y\) = and the sigmoid perceptron update rule uses a learning rate of , what will happen after one update step? Each \(w_{i}\) will change by (enter a number):
📗 Hint: the change should be \(-\alpha \left(a - y\right) x\), do not forget the minus sign in front.
📗 Answer: .
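📗 A minimal sketch of the update from the hint, \(\Delta w_{i} = -\alpha \left(a - y\right) x_{i}\) with \(a = f\left(w^\top x\right)\); the dimension, inputs, weights, label, and learning rate are placeholders:

```python
import math

# Sketch only: d, x, w, y, alpha are hypothetical placeholders for the
# values generated for your ID.
d = 3
x = [0.5] * d   # all inputs equal
w = [1.0] * d   # all weights equal
y = 1.0         # desired output
alpha = 0.1     # learning rate

z = sum(wi * xi for wi, xi in zip(w, x))
a = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
delta = -alpha * (a - y) * x[0]  # change in each w_i (all inputs identical)
print(delta)
```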

# Question 6 [3]

📗 (Fall 2014 Final Q4) Which functions are (weakly) convex on \(\mathbb{R}\)?
📗 Hint: either plot the functions or find the ones with a non-negative second derivative (i.e., a positive semi-definite Hessian matrix in higher dimensions); see the sketch after the choices.
📗 Choices:

None of the above
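📗 A minimal numerical sketch of the second-derivative test, assuming numpy and a few hypothetical candidate functions (your generated choices will differ): sample a central second difference on a grid and check that it is never negative.

```python
import numpy as np

# Sketch only: the candidate functions are hypothetical examples; test
# the choices generated for your ID instead.
def looks_convex(f, lo=-10.0, hi=10.0, n=2001, h=1e-4, tol=1e-6):
    xs = np.linspace(lo, hi, n)
    f2 = (f(xs + h) - 2 * f(xs) + f(xs - h)) / h**2  # approximates f''
    return bool(np.all(f2 >= -tol))  # weakly convex if f'' >= 0 everywhere

for name, f in [("x^2", lambda x: x**2), ("e^x", np.exp),
                ("x^3", lambda x: x**3), ("sin x", np.sin)]:
    print(name, "->", looks_convex(f))  # True, True, False, False
```

📗 A sampled check like this can miss non-convexity outside the grid, so it only supports the plot-or-differentiate reasoning in the hint, not replaces it.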

# Question 7 [2]

📗 (Fall 2012 Final Q8) Consider a single sigmoid perceptron with bias weight \(w_{0}\) = , a single input \(x_{1}\) with weight \(w_{1}\) = , and the sigmoid activation function \(g\left(z\right) = \dfrac{1}{1 + \exp\left(-z\right)}\). For what input \(x_{1}\) does the perceptron output the value \(a\) = ?
📗 Minor update to the notation: \(z = w_{0} + w_{1} x_{1}\) is the linear part, and \(a\) is the target output (called the activation in the lectures).
📗 Answer: .
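📗 Since the sigmoid is invertible for \(0 < a < 1\), solve \(a = g\left(z\right)\) for \(z = \ln \dfrac{a}{1 - a}\) and then \(x_{1} = \left(z - w_{0}\right) / w_{1}\). A minimal sketch with placeholder values:

```python
import math

# Sketch only: w0, w1, a are hypothetical placeholders for the values
# generated for your ID (requires 0 < a < 1 and w1 != 0).
w0, w1, a = -1.0, 2.0, 0.75
z = math.log(a / (1 - a))  # logit: the inverse of the sigmoid g
x1 = (z - w0) / w1         # undo the linear part z = w0 + w1 * x1
print(x1)
```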

# Question 8 [6]

📗 (Fall 2011 Midterm Q10, Spring 2018 Final Q4) With a linear threshold unit perceptron, implement the function given by the truth table below. That is, write down the weights \(w_{0}, w_{A}, w_{B}\). Enter the bias first, then the weights on A and B.

| A | B | function |
|---|---|----------|
| 0 | 0 | |
| 0 | 1 | |
| 1 | 0 | |
| 1 | 1 | |

📗 Answer (comma separated vector): .
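📗 Once you pick the weights, you can verify them against the whole truth table. A minimal sketch, assuming a threshold-at-zero LTU and hypothetical weights that implement AND (your target function will differ):

```python
# Sketch only: the weights and the target column (AND here) are
# hypothetical; substitute your answer and the table generated for your ID.
w0, wA, wB = -1.5, 1.0, 1.0
target = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

for (A, B), t in target.items():
    out = 1 if w0 + wA * A + wB * B >= 0 else 0  # assumed threshold at 0
    print(A, B, out, "ok" if out == t else "WRONG")
```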

# Question 9 [2]

📗 Play the "which face is real" game (Link) a few times and enter your accuracy as a percentage. Discuss how to distinguish real and fake faces on Piazza: Link.
📗 My accuracy is and I have participated in the discussion on Piazza.

# Question 10 [1]

📗 Please enter any comments and suggestions, including possible mistakes and bugs with the questions and the auto-grading, and any materials relevant to solving the questions that you think were not covered well during the lectures. If you have no comments, please enter "None": do not leave it blank.
📗 Answer: .

# Grade


 ***** ***** ***** ***** ***** 

 ***** ***** ***** ***** ***** 

📗 Please copy and paste the text between the *****s (not including the *****s) and submit it on Canvas as M2.
📗 You can save the text as a text file using the button, or just copy and paste it into a text file.
📗 Warning: the load button does not function properly for all questions; please recheck everything after you load. You can load your answers using the button from the text field:

Last Updated: November 09, 2021 at 12:30 AM