
# Q13 Quiz Instruction

📗 The quizzes must be completed during the lectures and submitted on TopHat: Link. No Canvas submission is required; the grades will be updated on Canvas by the end of the week.
📗 Please submit a regrade request if (i) you missed a few questions because you arrived late or had to leave during the lecture, or (ii) you selected obviously incorrect answers by mistake (one or two of these should not affect your grade): Link

| Answer | Points | Out of |
|---|---|---|
| Correct | 1 | Number of Questions |
| Plausible but Incorrect | 1 | - |
| Obviously Incorrect | 0 | - |


Slides: PDF

The following questions may appear as quiz questions during the lecture. If the questions are not generated correctly, try refreshing the page using the button at the top left corner.


# Question 1

📗 [1 point] The is:
📗 A: Too easy.
📗 B: Easy.
📗 C: Just right.
📗 D: Hard.
📗 E: Too hard.

# Question 2

📗 [4 points] Perform hierarchical clustering with linkage in one-dimensional space on the following points: , , , , , . Break ties in distances by first combining the instances with the smallest index (appears earliest in the list). Draw the cluster tree.
📗 Note: the node \(C_{1}\) should be the first cluster formed, \(C_{2}\) should be the second and so on. All edges should point to the instances (or other clusters) that belong to the cluster.
📗 Answer: 



📗 Note: to erase an edge, draw the same edge again.
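📗 Note: the points and the linkage type above are generated per student. As a minimal sketch only, the code below uses placeholder 1-D values and assumes single linkage; the function names and values are illustrative, not part of the quiz. It traces the merge order (and hence the cluster tree \(C_{1}, C_{2}, \ldots\)), breaking ties in favor of the pair that appears earliest in the list.

```python
# Sketch: merge order of agglomerative clustering on 1-D points (placeholder values).
# Single linkage is assumed here; the quiz may specify a different linkage.

def single_linkage(a, b):
    """Cluster distance under single linkage: the smallest pairwise distance."""
    return min(abs(x - y) for x in a for y in b)

def agglomerative(points, linkage=single_linkage):
    """Return the merge order as a list of (cluster, cluster, distance) tuples."""
    clusters = [[p] for p in points]  # every point starts in its own cluster
    merges = []
    while len(clusters) > 1:
        best = None
        # Scanning pairs in index order breaks ties in favor of the pair
        # that appears earliest in the list, as the question requires.
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = linkage(clusters[i], clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        merges.append((clusters[i][:], clusters[j][:], d))
        clusters[i] = clusters[i] + clusters[j]  # merged cluster keeps the smaller index
        del clusters[j]
    return merges

points = [1, 2, 4, 7, 11, 16]  # placeholder 1-D points
for step, (a, b, d) in enumerate(agglomerative(points), start=1):
    print(f"C_{step}: merge {a} and {b} at distance {d}")
```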

# Question 3

📗 [4 points] You are given the distance table. Consider the next iteration of hierarchical agglomerative clustering (another name for the hierarchical clustering method we covered in the lectures) using linkage. What will the new values be in the resulting distance table corresponding to the new clusters? If you merge two columns (rows), put the new distances in the column (row) with the smaller index. For example, if you merge columns 2 and 4, the new column 2 should contain the new distances and column 4 should be removed, i.e. the columns and rows should be in the order (1), (2 and 4), (3), (5).

\(d\) =
📗 Answer (matrix with multiple lines, each line is a comma separated vector): .
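📗 Note: the distance table and the linkage above are generated per student. The sketch below uses a placeholder 4-by-4 table and assumes complete linkage, where the new distance from the merged cluster to every other cluster is the maximum of the two old distances (single linkage would take the minimum); the merged cluster keeps the row and column with the smaller index.

```python
import numpy as np

def merge_clusters(d, i, j, linkage="complete"):
    """Update a symmetric distance table after merging clusters i and j (0-based, i < j).

    The merged cluster keeps row/column i; row/column j is removed.
    Complete linkage takes the maximum of the two old distances,
    single linkage takes the minimum.
    """
    combine = np.maximum if linkage == "complete" else np.minimum
    d = d.astype(float).copy()
    new = combine(d[i, :], d[j, :])   # new distances from the merged cluster
    d[i, :] = new
    d[:, i] = new
    d[i, i] = 0.0                     # distance of the merged cluster to itself
    return np.delete(np.delete(d, j, axis=0), j, axis=1)

# Placeholder table; merging 0-based indices 1 and 3 corresponds to the
# question's example of merging columns 2 and 4 (which are 1-based).
d = np.array([[0, 2, 6, 10],
              [2, 0, 5,  9],
              [6, 5, 0,  4],
              [10, 9, 4, 0]])
print(merge_clusters(d, 1, 3, linkage="complete"))
```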

# Question 4

📗 [4 points] Consider the four points: \(x_{1}\) = , \(x_{2}\) = , \(x_{3}\) = , \(x_{4}\) = . Let there be two initial cluster centers \(c_{1}\) = , \(c_{2}\) = . Use Euclidean distance. Break ties in distances by putting the point in the cluster with the smaller index (i.e. favor cluster 1). If a cluster contains no points, do not move the cluster center (it stays at the initial position). Write down the cluster centers after one iteration of k-means, the first cluster center (comma separated vector) on the first line and the second cluster center (comma separated vector) on the second line.

📗 Note: the red points are the cluster centers and the other points are the training items.
📗 Hint: See Fall 2019 Midterm Q22, Spring 2018 Midterm Q7, Fall 2017 Final Q22, Spring 2017 Midterm Q5, Fall 2014 Final Q20, Fall 2013 Final Q14, Fall 2006 Final Q14, Fall 2005 Final Q14. Find which cluster each \(x_{i}\) belongs to (call it \(k_{i}\)): it is the cluster center closest to the point. Compute the new cluster centers \(c'_{1}, c'_{2}\) as \(c'_{k} = \dfrac{1}{\displaystyle\sum_{k_{i} = k} 1} \displaystyle\sum_{k_{i} = k} x_{i}\).
📗 Answer (matrix with multiple lines, each line is a comma separated vector): .
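📗 Note: the points and initial centers above are generated per student. The sketch below runs one assignment-and-update step on placeholder 2-D values, with ties broken in favor of the cluster with the smaller index and empty clusters keeping their old centers, matching the rules stated in the question and the hint's centroid formula.

```python
import math

def one_kmeans_iteration(points, centers):
    """One k-means iteration: assign each point to its nearest center under
    Euclidean distance (ties go to the smaller-index center), then move each
    center to the mean of its points; an empty cluster keeps its old center."""
    # Assignment step: index() returns the first minimizer, i.e. the smaller index on ties.
    assignments = []
    for p in points:
        dists = [math.dist(p, c) for c in centers]
        assignments.append(dists.index(min(dists)))

    # Update step: average the assigned points coordinate-wise.
    new_centers = []
    for k, c in enumerate(centers):
        members = [p for p, a in zip(points, assignments) if a == k]
        if members:
            new_centers.append(tuple(sum(x) / len(members) for x in zip(*members)))
        else:
            new_centers.append(tuple(c))  # no points assigned: the center does not move
    return new_centers

# Placeholder points and initial centers (the quiz generates the real values).
points = [(0, 0), (2, 0), (4, 4), (6, 4)]
centers = [(1, 0), (5, 4)]
print(one_kmeans_iteration(points, centers))
```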





Last Updated: November 30, 2024 at 4:34 AM