Machine learning/Unsupervised Learning/K-means Clustering

K-means is a clustering method; clustering is an unsupervised learning problem.

  • In this method the number of clusters, $K$, is an input to the algorithm (a hyper-parameter).
  • K-means is a greedy algorithm. It is a special case of the expectation-maximization (EM) algorithm, in which we try to find a maximum likelihood estimate (MLE).
  • The K-means algorithm is not guaranteed to converge to the global minimum of the loss function (the sum of squared Euclidean distances from each point to its cluster center); finding the global minimum is an NP-hard problem. In practice the algorithm is therefore often restarted from several random initializations, as in the sketch after this list.
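
As a rough illustration of the local-minimum issue, the following minimal sketch (assuming scikit-learn and NumPy are installed; the toy data, seeds and parameter values are only illustrative) runs K-means from several single random initializations and prints the final loss, which scikit-learn reports as inertia_. Depending on the data, different seeds can end at different loss values.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Illustrative 2-D toy data: three Gaussian blobs.
X = np.vstack([rng.normal(loc=c, scale=1.5, size=(50, 2))
               for c in ((0, 0), (4, 0), (0, 4))])

# A single run from one random initialization (n_init=1) can get stuck in a
# local minimum; different seeds may therefore end with different losses.
for seed in range(5):
    km = KMeans(n_clusters=3, init="random", n_init=1, random_state=seed).fit(X)
    print(f"seed={seed}  final loss (inertia) = {km.inertia_:.2f}")
```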

Relation to the Gaussian Mixture Model (GMM)

  • A GMM is a more explicitly probabilistic approach to clustering: each point receives a soft (probabilistic) assignment to the clusters rather than a hard one.
  • The expectation-maximization (EM) algorithm is used to fit a Gaussian mixture model to the data, as in the sketch below.
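
As a brief sketch of the contrast (assuming scikit-learn; the toy data mirror the earlier example and are only illustrative), fitting a GMM with EM yields soft assignments, i.e. a posterior probability for each cluster, rather than the hard assignments produced by K-means.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=1.5, size=(50, 2))
               for c in ((0, 0), (4, 0), (0, 4))])

# EM fits the mixture; each point gets a posterior probability per component
# (a "soft" assignment), unlike the hard assignments of k-means.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
print(gmm.predict(X)[:5])                 # hard labels (most likely component)
print(gmm.predict_proba(X)[:5].round(3))  # soft assignments
```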

Intuition

Data points in each cluster are closer to their own cluster's center than to the centers of the other clusters.

Algorithm

Assume that the number of clusters is given by $K$ and the cluster centers are denoted by $\mu_k$, for $k = 1, \dots, K$.

We define a loss function for clustering and try to minimize it through the following greedy algorithm.

The loss function is defined as

$$ J = \sum_{n=1}^{N} \sum_{k=1}^{K} r_{nk} \, \lVert x_n - \mu_k \rVert^2 , $$

where $x_1, \dots, x_N$ are the data points and $r_{nk} \in \{0, 1\}$ indicates whether data point $x_n$ is assigned to cluster $k$.

Minimize $J$ with respect to $r_{nk}$ and $\mu_k$ by alternating the following two steps until convergence is achieved:

  1. Choose the optimal $r_{nk}$ for fixed $\mu_k$ by assigning each $x_n$ to the nearest cluster center.
  2. Choose the optimal $\mu_k$ for fixed $r_{nk}$ by updating each $\mu_k$ to be the empirical mean of the points assigned to cluster $k$. A NumPy sketch of both steps follows.
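
The following NumPy sketch implements the two alternating steps above. The function and variable names are illustrative rather than taken from any library, and initializing the centers with randomly chosen data points is one common but arbitrary choice.

```python
import numpy as np

def kmeans(X, K, n_iters=100, seed=0):
    """Plain two-step k-means on data X of shape (N, D); returns the centers,
    the hard assignments and the final loss J."""
    rng = np.random.default_rng(seed)
    # Initialize the centers mu_k with K distinct data points chosen at random.
    mu = X[rng.choice(len(X), size=K, replace=False)].copy()
    for _ in range(n_iters):
        # Step 1: for fixed mu, assign each x_n to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - mu[None, :, :], axis=2)  # (N, K)
        r = dists.argmin(axis=1)
        # Step 2: for fixed assignments, move each mu_k to the empirical mean
        # of the points assigned to cluster k (keep the old center if empty).
        new_mu = np.array([X[r == k].mean(axis=0) if np.any(r == k) else mu[k]
                           for k in range(K)])
        if np.allclose(new_mu, mu):   # converged: the centers stopped moving
            break
        mu = new_mu
    loss = np.sum((X - mu[r]) ** 2)   # J = sum_n ||x_n - mu_{r_n}||^2
    return mu, r, loss
```

For example, calling kmeans(X, K=3) on 2-D data X returns the centers $\mu_k$, the hard assignment of each point and the final value of $J$.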

Justification

In this section, we show that choosing the cluster centers $\mu_k$ according to step 2 of the algorithm minimizes the loss function $J$ for a fixed set of assignment factors $r_{nk}$.

In order to find the minimum of the loss function $J$ as a function of $\mu_k$, we find the point at which the gradient of the function is zero:

$$ \frac{\partial J}{\partial \mu_k} = 0 . $$

Therefore, we have

$$ \frac{\partial J}{\partial \mu_k} = -2 \sum_{n=1}^{N} r_{nk} \, (x_n - \mu_k) = 0 . $$

Getting rid of the factor of 2 in the last expression, we have

$$ \sum_{n=1}^{N} r_{nk} \, x_n = \mu_k \sum_{n=1}^{N} r_{nk} . $$

We also have $\sum_{n=1}^{N} r_{nk} = N_k$, the number of points assigned to cluster $k$, which simplifies to

$$ \mu_k = \frac{1}{N_k} \sum_{n=1}^{N} r_{nk} \, x_n , $$

i.e. each cluster center is the empirical mean of the points assigned to it, exactly as in step 2 of the algorithm.
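
As a quick numerical sanity check of this result (a sketch assuming NumPy; the points are arbitrary synthetic data), the within-cluster sum of squared distances evaluated at the empirical mean is no larger than at nearby perturbed centers:

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.normal(size=(200, 2))          # points assigned to one cluster
mu_star = pts.mean(axis=0)               # the step-2 candidate minimizer

def within_cluster_loss(center):
    return np.sum((pts - center) ** 2)   # this cluster's contribution to J

for shift in (0.0, 0.1, 0.5):
    print(f"shift={shift:.1f}  loss={within_cluster_loss(mu_star + shift):.2f}")
# The loss is smallest at shift=0.0, i.e. at the empirical mean.
```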