
# M11 Past Exam Problems

📗 Enter your ID (the wisc email ID without @wisc.edu) here: and click (or hit the Enter key).
📗 If the questions are not generated correctly, try refreshing the page using the button at the top left corner.
📗 The same ID should generate the same set of questions. Your answers are not saved when you close the browser. You could print the page: , solve the problems, then enter all your answers at the end.
📗 Please do not refresh the page: your answers will not be saved.

# Warning: please enter your ID before you start!



📗 [4 points] In a problem where each example has real-valued attributes (i.e. features), where each attribute can be split at possible thresholds (i.e. binary splits), to select the best attribute for a decision tree node at depth , where the root is at depth 0, how many conditional entropies must be calculated (at most)?
📗 Answer: .
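To see what is being counted, each candidate split is a (feature, threshold) pair, and one conditional entropy is computed per pair. Below is a minimal sketch; the counts `m` and `t` are hypothetical stand-ins for the generated values in the question.

```python
# Sketch: counting the conditional entropies evaluated when choosing a split.
# m (number of real-valued features) and t (thresholds per feature) are
# hypothetical; the exam generates the actual numbers.
from math import log2

def conditional_entropy(labels, mask):
    """H(Y | split): size-weighted entropy of the two sides of a binary split."""
    def entropy(ys):
        if not ys:
            return 0.0
        h = 0.0
        for c in set(ys):
            p = ys.count(c) / len(ys)
            h -= p * log2(p)
        return h
    left = [y for y, m_ in zip(labels, mask) if m_]
    right = [y for y, m_ in zip(labels, mask) if not m_]
    n = len(labels)
    return len(left) / n * entropy(left) + len(right) / n * entropy(right)

m, t = 3, 4                # hypothetical: 3 features, 4 thresholds each
num_entropies = m * t      # one conditional entropy per (feature, threshold) pair
```

Note that real-valued attributes can be reused at deeper nodes, which is why the depth of the node does not shrink the count of candidate splits.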
📗 [3 points] Consider a training set with 8 items. The first dimension of their feature vectors is: . However, this dimension is continuous (i.e. it is a real number). To build a decision tree, one may ask questions of the form "Is \(x_{1} \geq \theta\)?" where \(\theta\) is a threshold value. Ideally, what is the maximum number of different \(\theta\) values we should consider for the first dimension \(x_{1}\)? Count the values of \(\theta\) such that all instances belong to one class. 

📗 Answer: .
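A common way to enumerate the candidate thresholds is to take the midpoints between consecutive distinct sorted values: any two thresholds falling in the same gap split the data identically. A minimal sketch, using a hypothetical 8-item feature column (the exam generates the actual values):

```python
# Sketch: candidate thresholds for a continuous feature are the midpoints
# between consecutive distinct sorted values; x1 below is hypothetical.
def candidate_thresholds(values):
    distinct = sorted(set(values))
    return [(a + b) / 2 for a, b in zip(distinct, distinct[1:])]

x1 = [1, 2, 2, 3, 5, 5, 6, 8]        # hypothetical training values
thetas = candidate_thresholds(x1)     # one theta per gap between distinct values
```

With repeated values the count is (number of distinct values) - 1, not (number of items) - 1.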
📗 [3 points] A decision tree has depth \(d\) = (a decision tree where the root is a leaf node has \(d\) = 0). All its internal nodes have \(b\) = children. The tree is also complete, meaning all leaf nodes are at depth \(d\). If we require each leaf node to contain at least training examples, what is the minimum size of the training set?
📗 Answer: .
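The reasoning is a direct count: a complete tree of depth \(d\) with branching factor \(b\) has \(b^{d}\) leaves, and each leaf must hold at least \(k\) examples. A sketch with hypothetical \(d\), \(b\), \(k\):

```python
# Sketch: minimum training set size for a complete depth-d tree with b-way
# internal nodes and at least k examples per leaf. d, b, k are hypothetical.
def min_training_set_size(d, b, k):
    return k * b ** d        # b**d leaves, each holding at least k examples

n = min_training_set_size(d=3, b=2, k=5)   # 2**3 = 8 leaves, 5 examples each
```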
📗 [3 points] Consider a -dimensional feature space where each feature takes an integer value from 0 to (including 0 and ). What is the smallest and largest distance between two distinct (non-overlapping) points in the feature space?
📗 Answer (comma separated vector): .
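Assuming Euclidean distance (the generated question may specify the metric), the extremes come from two adjacent grid points along one axis and from two opposite corners. A sketch with hypothetical dimension and range:

```python
# Sketch: distance extremes on an integer grid {0, ..., m}^d under the
# Euclidean metric. d and m below are hypothetical.
from math import sqrt

def distance_range(d, m):
    smallest = 1.0            # neighbours differing by 1 in a single coordinate
    largest = m * sqrt(d)     # opposite corners: sqrt(d * m**2)
    return smallest, largest

smallest, largest = distance_range(d=4, m=3)
```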
📗 [3 points] Suppose there is a single integer input \(x\) = {\(0\), \(1\), ..., }, and the label is binary \(y\) = {\(0\), \(1\)}. Let \(\mathcal{H}\) be a hypothesis space containing all possible linear classifiers. How many unique classifiers are there in \(\mathcal{H}\)? For example, the three linear classifiers \(1_{\left\{x < 0.4\right\}}\), \(1_{\left\{x \leq 0.4\right\}}\) and \(1_{\left\{x < 0.6\right\}}\) are considered the same classifier since they classify all possible data sets the same way.
📗 Answer: .
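Since the input is one integer, every linear classifier reduces to a threshold rule in one of two directions, so the distinct classifiers can be counted by brute-forcing the labelings they produce. A sketch with a hypothetical upper endpoint `n`:

```python
# Sketch: brute-force count of distinct labelings of {0, ..., n} produced by
# 1-D threshold classifiers 1{x < t} and 1{x > t}. n below is hypothetical.
def count_threshold_classifiers(n):
    points = list(range(n + 1))
    labelings = set()
    # half-integer thresholds below, between, and above the points cover
    # every distinct behaviour
    thetas = [i - 0.5 for i in range(n + 2)]
    for t in thetas:
        labelings.add(tuple(1 if x < t else 0 for x in points))
        labelings.add(tuple(1 if x > t else 0 for x in points))
    return len(labelings)

count = count_threshold_classifiers(4)
```

The brute force agrees with the closed form \(2(n + 1)\): \(n + 2\) cut positions in each direction, minus the two constant classifiers counted twice.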
📗 [3 points] Suppose there are \(2\) discrete features \(x_{1}, x_{2}\) that can take on values and , and a binary decision tree is trained based on these features. What is the maximum number of leaves the decision tree can have?
📗 Answer: .
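As a quick sanity check, a tree cannot have more useful leaves than there are distinct feature combinations; with hypothetical cardinalities for the two features:

```python
# Sketch: the leaf count is bounded by the number of distinct feature
# combinations. v1 and v2 (values per feature) are hypothetical.
v1, v2 = 3, 4
max_leaves = v1 * v2    # one leaf per (x1, x2) combination at most
```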
📗 [3 points] Given the following training set, what is the maximum accuracy of a decision tree with depth 1 trained on this set? Enter a number between 0 and 1.
| index | \(x_{1}\) | \(y\) |
|---|---|---|
| 1 | | |
| 2 | | |
| 3 | | |
| 4 | | |
| 5 | | |
| 6 | | |

📗 Answer: .
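A depth-1 tree is a single threshold stump, so the maximum accuracy can be found by trying every candidate threshold (and both label orientations) plus the depth-0 majority fallback. A sketch on a hypothetical 6-item training set (the exam generates the actual \(x_{1}\) and \(y\) columns):

```python
# Sketch: brute-force the best depth-1 threshold stump on a tiny training
# set. The data below is hypothetical.
def best_stump_accuracy(xs, ys):
    n = len(ys)
    best = max(ys.count(0), ys.count(1)) / n     # depth-0 fallback: majority label
    distinct = sorted(set(xs))
    for a, b in zip(distinct, distinct[1:]):
        theta = (a + b) / 2
        for left_label in (0, 1):
            pred = [left_label if x <= theta else 1 - left_label for x in xs]
            acc = sum(p == y for p, y in zip(pred, ys)) / n
            best = max(best, acc)
    return best

acc = best_stump_accuracy([1, 2, 3, 4, 5, 6], [0, 0, 1, 0, 1, 1])
```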
📗 [3 points] A hospital trains a decision tree to predict if any given patient has technophobia or not. The training set consists of patients. There are features. The labels are binary. The decision tree is not pruned. What are the smallest and largest possible training set accuracy of the decision tree? Enter two numbers between 0 and 1. Hint: patients with the same features may have different labels.
📗 Answer (comma separated vector): .
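The hint is the crux: an unpruned tree can split until every leaf is pure in features, but patients with identical features and conflicting labels cap the accuracy at the within-group majority. A sketch with a hypothetical training set containing one such clash:

```python
# Sketch: the largest possible training accuracy of an unpruned tree is the
# majority label within each group of identical feature vectors. The data
# below is hypothetical.
from collections import Counter, defaultdict

def max_training_accuracy(features, labels):
    groups = defaultdict(list)
    for f, y in zip(features, labels):
        groups[f].append(y)
    correct = sum(max(Counter(ys).values()) for ys in groups.values())
    return correct / len(labels)

# two patients share features (0, 1) but carry different labels
feats = [(0, 0), (0, 1), (0, 1), (1, 1)]
labels = [0, 0, 1, 1]
acc = max_training_accuracy(feats, labels)
```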
📗 [3 points] Given the five decision stumps in a random forest in the following table, what is the predicted label for a new data point \(x\) = \(\begin{bmatrix} x_{1} & x_{2} & ... \end{bmatrix}\) = ? Enter a single number (-1 or 1; and 0 in case of a tie).
| Index | Decision stump |
|---|---|
| 1 | Label 1 if ; Label -1 otherwise |
| 2 | Label 1 if ; Label -1 otherwise |
| 3 | Label 1 if ; Label -1 otherwise |
| 4 | Label 1 if ; Label -1 otherwise |
| 5 | Label 1 if ; Label -1 otherwise |

📗 Answer: .
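The forest's prediction is the sign of the stump votes. A sketch with hypothetical stump rules standing in for the generated table:

```python
# Sketch: majority vote over decision stumps in a random forest. The stump
# conditions below are hypothetical stand-ins for the generated table.
def forest_predict(x, stumps):
    votes = sum(1 if rule(x) else -1 for rule in stumps)
    return (votes > 0) - (votes < 0)          # -1, 1, or 0 in case of a tie

stumps = [
    lambda x: x[0] <= 2,           # stump 1: label 1 if x1 <= 2
    lambda x: x[1] > 5,            # stump 2: label 1 if x2 > 5
    lambda x: x[0] + x[1] <= 4,    # stump 3: label 1 if x1 + x2 <= 4
]
label = forest_predict((1, 6), stumps)
```

With an odd number of stumps a tie is impossible, but the tie case matters for even-sized tables.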
📗 [3 points] Given three decision stumps in a random forest in the following table, what is the predicted label for a new data point \(x\) = \(\begin{bmatrix} x_{1} \\ x_{2} \\ ... \end{bmatrix}\) = ? Enter a single number (-1 or 1; and 0 in case of a tie).
| Index | Decision stump |
|---|---|
| 1 | Label 1 if ; Label -1 otherwise |
| 2 | Label 1 if ; Label -1 otherwise |
| 3 | Label 1 if ; Label -1 otherwise |

📗 Answer: .
📗 [4 points] Given the training set below, find the labels of the decision tree that achieve 100 percent accuracy. Enter \(\hat{y}_{1}, \hat{y}_{2}, \hat{y}_{3}, \hat{y}_{4}\) as a vector.
📗 The training set:
| \(x_{1}\) | \(x_{2}\) | \(y\) |
|---|---|---|
| \(0\) | \(0\) | |
| \(0\) | \(1\) | |
| \(1\) | \(0\) | |
| \(1\) | \(1\) | |

📗 The decision tree:
- if \(x_{1} \leq 0.5\):
    - if \(x_{2} \leq 0.5\): label \(\hat{y}_{1}\)
    - else (\(x_{2} > 0.5\)): label \(\hat{y}_{2}\)
- else (\(x_{1} > 0.5\)):
    - if \(x_{2} \leq 0.5\): label \(\hat{y}_{3}\)
    - else (\(x_{2} > 0.5\)): label \(\hat{y}_{4}\)

📗 Answer (comma separated vector): .
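The key observation is that the two 0.5 splits put each of the four training points in its own leaf, so perfect accuracy means each leaf label must equal the \(y\) of its point. A sketch with hypothetical labels (the exam's \(y\) column is generated):

```python
# Sketch: the depth-2 tree isolates each training point in its own leaf, so
# 100% accuracy requires leaf label i to match y of training point i.
# The label vector ys below is hypothetical.
def tree_predict(x1, x2, leaf_labels):
    y1, y2, y3, y4 = leaf_labels
    if x1 <= 0.5:
        return y1 if x2 <= 0.5 else y2
    return y3 if x2 <= 0.5 else y4

ys = [0, 1, 1, 0]                          # hypothetical labels (XOR pattern)
points = [(0, 0), (0, 1), (1, 0), (1, 1)]  # the four training points
preds = [tree_predict(a, b, ys) for a, b in points]
```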

# Grade




📗 You could save the text in the above text box to a file using the button, or copy and paste it into a file yourself.
📗 You could load your answers from the text (or txt file) in the text box below using the button. The first two lines should be "##m: 11" and "##id: your id", and the format of the remaining lines should be "##1: your answer to question 1" newline "##2: your answer to question 2", etc. Please make sure that your answers are loaded correctly before submitting them.


📗 You can find videos going through the questions on Link.





Last Updated: July 03, 2024 at 12:23 PM