# Unsupervised Learning

📗 Supervised learning: \(\left(x_{1}, y_{1}\right), \left(x_{2}, y_{2}\right), ..., \left(x_{n}, y_{n}\right)\).
📗 Unsupervised learning: \(\left(x_{1}\right), \left(x_{2}\right), ..., \left(x_{n}\right)\).
➩ Clustering: separates items into groups.
➩ Novelty (outlier) detection: finds items that are different from the rest (effectively clustering into two groups: normal and novel).
➩ Dimensionality reduction: represents each item by a lower dimensional feature vector while maintaining key characteristics.
📗 Unsupervised learning applications:
➩ Google News.
➩ Google Photos.
➩ Image segmentation.
➩ Text processing.
➩ Data visualization.
➩ Efficient storage.
➩ Noise removal.



# Hierarchical Clustering

📗 Hierarchical clustering iteratively merges groups: Link, Wikipedia.
➩ Start with each item as its own cluster.
➩ Merge clusters that are closest to each other.
➩ The result is a binary tree with close clusters as children.
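📗 A minimal Python sketch of this merge loop (an illustration only, assuming Euclidean distance and single linkage, both defined in the next two sections):
```python
import numpy as np

def hierarchical_cluster(points, k):
    """Agglomerative clustering: repeatedly merge the two closest clusters
    (single linkage, Euclidean distance) until k clusters remain."""
    # Start with each item as its own cluster (a list of index lists).
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > k:
        best, best_d = (0, 1), float("inf")
        # Find the pair of clusters with the smallest single linkage distance.
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(np.linalg.norm(points[i] - points[j])
                        for i in clusters[a] for j in clusters[b])
                if d < best_d:
                    best_d, best = d, (a, b)
        a, b = best
        # Merge the closest pair; in the dendrogram they become siblings.
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters

points = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9], [9.0, 0.0]])
print(hierarchical_cluster(points, 2))
```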
TopHat Discussion
📗 [1 points] Given the following dataset, use hierarchical clustering to divide the points into groups. Drag one point to another point to merge them into one cluster. Click on a point to move it out of the cluster.





# Distance between Points

📗 The distance between points in \(m\)-dimensional space is usually measured by the Euclidean distance (also called the \(L_{2}\) distance).
➩ Euclidean distance (\(L_{2}\)): \(\left\|x_{i} - x_{j}\right\|_{2} = \sqrt{\left(x_{i 1} - x_{j 1}\right)^{2} + \left(x_{i 2} - x_{j 2}\right)^{2} + ... + \left(x_{i m} - x_{j m}\right)^{2}}\): Wikipedia.
📗 Distances can also be measured by \(L_{1}\) or \(L_{\infty}\) distances.
➩ Manhattan distance (\(L_{1}\)): \(\left\|x_{i} - x_{j}\right\|_{1} = \left| x_{i 1} - x_{j 1} \right| + \left| x_{i 2} - x_{j 2} \right| + ... + \left| x_{i m} - x_{j m} \right|\): Wikipedia.
➩ Chebyshev distance (\(L_{\infty}\)): \(\left\|x_{i} - x_{j}\right\|_{\infty} = \displaystyle\max\left\{\left| x_{i 1} - x_{j 1} \right|, \left| x_{i 2} - x_{j 2} \right|, ..., \left| x_{i m} - x_{j m} \right|\right\}\): Wikipedia.
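📗 A quick sketch computing all three distances with NumPy (the library choice and the example vectors are assumptions for illustration):
```python
import numpy as np

x_i = np.array([1.0, 4.0, 2.0])
x_j = np.array([3.0, 1.0, 2.0])

diff = np.abs(x_i - x_j)          # coordinate-wise absolute differences [2, 3, 0]
l2 = np.sqrt(np.sum(diff ** 2))   # Euclidean: sqrt(2^2 + 3^2 + 0^2) ≈ 3.61
l1 = np.sum(diff)                 # Manhattan: 2 + 3 + 0 = 5
linf = np.max(diff)               # Chebyshev: max(2, 3, 0) = 3
print(l2, l1, linf)
```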
TopHat Discussion
📗 [1 points] Move the green point so that it is within 100 pixels of the red point measured by the selected distance. Highlight the region containing all points within 100 pixels of the red point.

Distance:



# Distance between Clusters

📗 Distance between clusters (group of points) can be measured by single linkage distance, complete linkage distance, or average linkage distance.
➩ Single linkage distance: the shortest distance from any item in one cluster to any item in the other cluster: Wikipedia.
➩ Complete linkage distance: the longest distance from any item in one cluster to any item in the other cluster: Wikipedia.
➩ Average linkage distance: the average distance from any item in one cluster to any item in the other cluster (average of distances, not distance between averages): Wikipedia.
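📗 A sketch of the three linkage distances, computed from the full table of pairwise distances (the example clusters are made up for illustration):
```python
import numpy as np

def linkage_distances(A, B):
    """Single, complete, and average linkage between clusters A and B."""
    # Pairwise Euclidean distances between every item in A and every item in B.
    d = np.array([[np.linalg.norm(a - b) for b in B] for a in A])
    # Single = shortest pair, complete = longest pair,
    # average = mean of all pairwise distances (not distance between means).
    return d.min(), d.max(), d.mean()

A = np.array([[0.0, 0.0], [0.0, 1.0]])
B = np.array([[3.0, 0.0], [4.0, 0.0]])
print(linkage_distances(A, B))  # single = 3.0, complete ≈ 4.12, average ≈ 3.57
```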
TopHat Discussion
📗 [1 points] Highlight the Euclidean distance between the two clusters (red and blue) measured by the selected linkage distance.

Distance:
TopHat Quiz (Past Exam Question)
📗 [4 points] You are given the distance table. Consider the next iteration of hierarchical agglomerative clustering (another name for the hierarchical clustering method we covered in the lectures) using the selected linkage. What will the new values be in the resulting distance table corresponding to the new clusters? If you merge two columns (rows), put the new distances in the column (row) with the smaller index. For example, if you merge columns 2 and 4, the new column 2 should contain the new distances and column 4 should be removed, i.e. the columns and rows should be in the order (1), (2 and 4), (3), (5).

\(d\) =
📗 Answer (matrix with multiple lines, each line is a comma separated vector): .




# Number of Clusters

📗 The number of clusters should be chosen based on prior knowledge about the dataset.
📗 The algorithm can also stop merging as soon as all the between-cluster distances are larger than some fixed threshold.
📗 The binary tree generated by hierarchical clustering is often called a dendrogram: Wikipedia.



# K Means Clustering

📗 K-means clustering (2-means, 3-means, ...) iteratively updates a fixed number of cluster centers: Link, Wikipedia.
➩ Start with K random cluster centers.
➩ Assign each item to its closest center.
➩ Update each cluster center to the mean of the items assigned to it.
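📗 A minimal sketch of this loop (often called Lloyd's algorithm), with the assignment and update steps written out in NumPy (the example data is made up):
```python
import numpy as np

def k_means(points, centers, iterations=10):
    """Alternate between assigning items to centers and updating centers."""
    centers = centers.copy()
    for _ in range(iterations):
        # Assignment step: each item goes to its closest center (Euclidean).
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each center moves to the mean of its assigned items.
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = points[labels == k].mean(axis=0)
    return centers, labels

points = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
init = np.array([[0.0, 0.5], [5.0, 4.0]])
print(k_means(points, init))
```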
TopHat Discussion
📗 [1 points] Given the following dataset, use k-means clustering to divide the points into groups. Drag the centers, and click on a center to move it to the mean of the points assigned to it.

Total distortion:



# Total Distortion

📗 K-means clustering tries to minimize the total squared distance from all items to their cluster centers. This total is called the total distortion or inertia.
📗 Suppose the cluster centers are \(c_{1}, c_{2}, ..., c_{K}\), and the cluster center for an item \(x_{i}\) is \(c\left(x_{i}\right)\) (one of \(c_{1}, c_{2}, ..., c_{K}\)), then the total distortion is \(\left\|x_{1} - c\left(x_{1}\right)\right\|_{2}^{2} + \left\|x_{2} - c\left(x_{2}\right)\right\|_{2}^{2} + ... + \left\|x_{n} - c\left(x_{n}\right)\right\|_{2}^{2}\).
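📗 A sketch of this computation, assuming the cluster assignments are given as an integer label per item (example numbers are made up):
```python
import numpy as np

def total_distortion(points, centers, labels):
    """Sum of squared Euclidean distances from each item to its center."""
    diffs = points - centers[labels]   # x_i - c(x_i) for every item
    return np.sum(diffs ** 2)

points = np.array([[0.0, 0.0], [0.0, 2.0], [5.0, 5.0]])
centers = np.array([[0.0, 1.0], [5.0, 5.0]])
labels = np.array([0, 0, 1])           # item i belongs to cluster labels[i]
print(total_distortion(points, centers, labels))  # 1 + 1 + 0 = 2
```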
Math Note
📗 The K-means procedure is similar to using gradient descent to minimize the total distortion: Wikipedia.
➩ The gradient of the total distortion with respect to cluster center \(c_{k}\) is \(-2 \displaystyle\sum_{x : c\left(x\right) = c_{k}} \left(x - c_{k}\right)\); setting this to \(0\) gives the update step formula \(c_{k} = \dfrac{1}{n_{k}} \displaystyle\sum_{x: c\left(x\right) = c_{k}} x\), where \(n_{k}\) is the number of items that belong to cluster \(k\) and the sum is over all items in cluster \(k\).
➩ One issue with some optimization algorithms like gradient descent is that they sometimes converge to local minima that are not the global minimum. This is also the case for K means clustering: Wikipedia.
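📗 A small sketch of this failure mode on a hand-picked dataset: two well separated pairs of points, where one initialization reaches the global minimum and another converges to a stable local minimum:
```python
import numpy as np

def k_means_distortion(points, centers, iterations=20):
    """Run plain K-means from the given centers; return the total distortion."""
    centers = centers.copy()
    for _ in range(iterations):
        # Assignment step: each item goes to its closest center.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each center moves to the mean of its assigned items.
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = points[labels == k].mean(axis=0)
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return np.sum(d.min(axis=1) ** 2)

# Four points forming two well separated pairs.
points = np.array([[0.0, 0.0], [0.0, 2.0], [4.0, 0.0], [4.0, 2.0]])
print(k_means_distortion(points, np.array([[0.0, 1.0], [4.0, 1.0]])))  # 4.0, the global minimum
print(k_means_distortion(points, np.array([[2.0, 0.0], [2.0, 2.0]])))  # 16.0, a stable local minimum
```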
📗 [1 points] Move the points to see the derivatives (slope of tangent line) of the function \(x^{2}\):

Point: 0
Learning rate: 0.5
Derivative: 0

TopHat Quiz (Past Exam Question)
📗 [3 points] Perform k-means clustering on six points: \(x_{1}\) = , \(x_{2}\) = , \(x_{3}\) = , \(x_{4}\) = , \(x_{5}\) = , \(x_{6}\) = . Initially the cluster centers are at \(c_{1}\) = , \(c_{2}\) = . Run k-means for one iteration (assign the points, update center once and reassign the points once). Break ties in distances by putting the point in the cluster with the smaller index (i.e. favor cluster 1). What is the reduction in total distortion? Use Euclidean distance and calculate the total distortion by summing the squares of the individual distances to the center.

📗 Note: the red points are the cluster centers and the other points are the training items.
📗 Answer: .




# Number of Clusters

📗 There are a few ways to choose the number of clusters K.
➩ K can be chosen based on prior knowledge about the items.
➩ K cannot be chosen by minimizing total distortion since the total distortion is always minimized at \(0\) when \(K = n\) (number of clusters = number of training items).
➩ K can be chosen by minimizing total distortion plus some regularizer, for example, \(c \cdot m K \log\left(n\right)\) where \(c\) is a fixed constant and \(m\) is the number of features for each item.
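📗 A sketch of choosing \(K\) by this criterion; the per-\(K\) distortions and the constant \(c\) below are made-up numbers for illustration:
```python
import numpy as np

# Hypothetical total distortions for K = 1..6 on some dataset
# (made-up numbers: distortion always decreases as K grows).
distortion = {1: 100.0, 2: 40.0, 3: 18.0, 4: 15.0, 5: 13.0, 6: 12.0}
n, m, c = 60, 2, 1.0  # number of items, number of features, assumed constant

def score(K):
    """Total distortion plus the regularizer c * m * K * log(n)."""
    return distortion[K] + c * m * K * np.log(n)

best_K = min(distortion, key=score)
print(best_K, score(best_K))  # K = 3 minimizes the regularized objective here
```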
TopHat Quiz
📗 [1 points] Upload an image and use K-means clustering to group the pixels into \(K\) clusters. Find an appropriate value of \(K\): . Click on the image to perform the clustering for iterations.

Number of clusters:




# Initial Clusters

📗 There are a few ways to initialize the clusters: Link.
➩ The initial cluster centers can be randomly chosen in the domain.
➩ The initial cluster centers can be randomly chosen as \(K\) distinct items.
➩ The first cluster center can be a random item, the second cluster center can be the item farthest from the first center, the third cluster center can be the item farthest from the nearest of the first two centers, and so on (a sketch of this farthest-first rule follows below).
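📗 A sketch of this farthest-first initialization (the random seed and example points are arbitrary):
```python
import numpy as np

def farthest_first_init(points, K, rng=np.random.default_rng(0)):
    """Pick the first center as a random item, then repeatedly pick the item
    farthest from its nearest already-chosen center."""
    centers = [points[rng.integers(len(points))]]  # seeded for reproducibility
    while len(centers) < K:
        # Distance from every item to its closest chosen center.
        d = np.min([np.linalg.norm(points - c, axis=1) for c in centers], axis=0)
        centers.append(points[np.argmax(d)])
    return np.array(centers)

points = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0], [9.0, 0.0]])
print(farthest_first_init(points, 3))
```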



📗 Notes and code adapted from the course taught by Professors Jerry Zhu, Yudong Chen, Yingyu Liang, and Charles Dyer.
📗 Content from note blocks marked "optional" and content from Wikipedia and other demo links are helpful for understanding the materials, but will not be explicitly tested on the exams.






Last Updated: August 22, 2025 at 10:06 AM