
            O(1) 的小樂


            k-means clustering

                  In statistics and machine learning, k-means clustering is a method of cluster analysis which aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean. It is similar to the expectation-maximization algorithm for mixtures of Gaussians in that they both attempt to find the centers of natural clusters in the data, as well as in the iterative refinement approach employed by both algorithms.

             

            Description

            Given a set of observations (x1, x2, …, xn), where each observation is a d-dimensional real vector, k-means clustering aims to partition the n observations into k sets (k < n) S = {S1, S2, …, Sk} so as to minimize the within-cluster sum of squares (WCSS):

            \underset{\mathbf{S}}{\operatorname{arg\,min}} \sum_{i=1}^{k} \sum_{\mathbf{x}_j \in S_i} \left\| \mathbf{x}_j - \boldsymbol{\mu}_i \right\|^2

            where μi is the mean of points in Si.
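As a concrete illustration of the objective, the WCSS can be computed directly from a candidate partition: average the points of each cluster to get μ_i, then sum the squared distances. A minimal pure-Python sketch (the sample points and partition below are made up for illustration):

```python
# Within-cluster sum of squares (WCSS) for a given partition.
# The sample data below is illustrative only.

def wcss(clusters):
    """clusters: list of clusters, each a list of d-dimensional points (tuples)."""
    total = 0.0
    for points in clusters:
        d = len(points[0])
        # mu_i: componentwise mean of the points in cluster S_i
        mu = [sum(p[k] for p in points) / len(points) for k in range(d)]
        # add ||x_j - mu_i||^2 for every x_j in S_i
        total += sum(sum((p[k] - mu[k]) ** 2 for k in range(d)) for p in points)
    return total

clusters = [[(0.0, 0.0), (1.0, 0.0)], [(10.0, 10.0), (11.0, 10.0)]]
print(wcss(clusters))  # each cluster contributes 2 * 0.5^2 = 0.5, so 1.0
```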

             

            Algorithms

            Regarding computational complexity, the k-means clustering problem is:

            • NP-hard in general Euclidean space of dimension d, even for 2 clusters [4][5]
            • NP-hard for a general number of clusters k, even in the plane [6]
            • If k and d are fixed, the problem can be solved exactly in time O(n^{dk+1} log n), where n is the number of entities to be clustered [7]

            Thus, a variety of heuristic algorithms are generally used.

             

            Note that, since the problem is NP-hard in general, the methods we usually turn to in practice are heuristics.

            Standard algorithm

            The most common algorithm uses an iterative refinement technique.

            Due to its ubiquity it is often called the k-means algorithm; it is also referred to as Lloyd's algorithm, particularly in the computer science community.

            Given an initial set of k means m1(1),…,mk(1), which may be specified randomly or by some heuristic, the algorithm proceeds by alternating between two steps:[8]

            Assignment step: Assign each observation to the cluster with the closest mean (i.e. partition the observations according to the Voronoi diagram generated by the means; equivalently, the space is divided into k Voronoi cells, and the norm here is the 2-norm, i.e. Euclidean distance, which is what makes the correspondence with the Voronoi diagram hold).
            S_i^{(t)} = \left\{ \mathbf x_j : \big\| \mathbf x_j - \mathbf m^{(t)}_i \big\| \leq \big\| \mathbf x_j - \mathbf m^{(t)}_{i^*} \big\| \text{ for all }i^*=1,\ldots,k \right\}
             
            Update step: Calculate the new means to be the centroid of the observations in the cluster.
            \mathbf m^{(t+1)}_i = \frac{1}{|S^{(t)}_i|} \sum_{\mathbf x_j \in S^{(t)}_i} \mathbf x_j
            Recompute the means.

            The algorithm is deemed to have converged when the assignments no longer change.
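The alternation of the two steps above can be sketched in plain Python. This is a minimal illustration of the standard iterative refinement, not an optimized implementation; the initialization (k random data points), the empty-cluster handling, and the sample data are choices made here for the sketch:

```python
import random

def kmeans(points, k, max_iter=100, seed=0):
    """Standard k-means iteration: alternate the assignment and update steps."""
    rng = random.Random(seed)
    means = [list(p) for p in rng.sample(points, k)]  # initial means: k random points
    assign = None
    for _ in range(max_iter):
        # Assignment step: each point goes to the cluster with the nearest mean.
        new_assign = [min(range(k),
                          key=lambda i: sum((x - m) ** 2 for x, m in zip(p, means[i])))
                      for p in points]
        if new_assign == assign:      # converged: assignments no longer change
            break
        assign = new_assign
        # Update step: each mean becomes the centroid of its cluster.
        for i in range(k):
            members = [p for p, a in zip(points, assign) if a == i]
            if members:               # keep the old mean if a cluster empties
                means[i] = [sum(c) / len(members) for c in zip(*members)]
    return means, assign

data = [(0.0, 0.0), (0.5, 0.1), (9.0, 9.0), (9.5, 9.1)]
means, labels = kmeans(data, k=2)
```

On this toy data the two tight groups end up in separate clusters regardless of which points are drawn as initial means.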

             

            The overall flow of the algorithm is as illustrated in the figure above.

             

            As it is a heuristic algorithm, there is no guarantee that it will converge to the global optimum, and the result may depend on the initial clusters. As the algorithm is usually very fast, it is common to run it multiple times with different starting conditions. However, in the worst case, k-means can be very slow to converge: in particular it has been shown that there exist certain point sets, even in 2 dimensions, on which k-means takes exponential time, that is 2^{Ω(n)}, to converge[9][10]. These point sets do not seem to arise in practice: this is corroborated by the fact that the smoothed running time of k-means is polynomial[11].

            In the worst case the running time is 2^{Ω(n)}, but in practice the algorithm generally behaves like a polynomial-time one.

            The "assignment" step is also referred to as expectation step, the "update step" as maximization step, making this algorithm a variant of the generalized expectation-maximization algorithm.

            Variations

            • The expectation-maximization algorithm (EM algorithm) maintains probabilistic assignments to clusters, instead of deterministic assignments, and multivariate Gaussian distributions instead of means.
            • k-means++ seeks to choose better starting clusters.
            • The filtering algorithm uses kd-trees to speed up each k-means step.[12]
            • Some methods attempt to speed up each k-means step using coresets[13] or the triangle inequality.[14]
            • Escape local optima by swapping points between clusters.[15]
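Of these variations, k-means++ changes only the seeding: after a uniformly random first center, each subsequent center is drawn with probability proportional to its squared distance D(x)² from the nearest center chosen so far. A sketch of that seeding step alone (the sample data is made up; the function name is ours):

```python
import random

def kmeanspp_init(points, k, seed=0):
    """k-means++ seeding: spread initial centers out via D^2-weighted sampling."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]              # first center: uniform at random
    while len(centers) < k:
        # D(x)^2: squared distance from each point to its nearest chosen center
        d2 = [min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers)
              for p in points]
        # next center drawn with probability proportional to D(x)^2
        centers.append(rng.choices(points, weights=d2, k=1)[0])
    return centers

data = [(0.0, 0.0), (0.1, 0.0), (10.0, 10.0), (10.1, 10.0)]
print(kmeanspp_init(data, k=2))
```

Because an already-chosen point has D(x)² = 0, it can never be picked again, and far-away points dominate the weights, which is what makes the initial centers well spread.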

            Discussion

            [Figure: Iris Flowers Clustering kMeans.svg]

            k-means clustering result for the Iris flower data set and actual species visualized using ELKI. Cluster means are marked using larger, semi-transparent symbols.

            [Figure: ClusterAnalysis Mouse.svg]

            k-means clustering and EM clustering on an artificial dataset ("mouse"). The tendency of k-means to produce equi-sized clusters leads to bad results here, while EM benefits from the Gaussian distribution present in the data set.

            The two key features of k-means which make it efficient are often regarded as its biggest drawbacks:

            A key limitation of k-means is its cluster model. The concept is based on spherical clusters that are separable in a way so that the mean value converges towards the cluster center. The clusters are expected to be of similar size, so that the assignment to the nearest cluster center is the correct assignment. When for example applying k-means with a value of k = 3 onto the well-known Iris flower data set, the result often fails to separate the three Iris species contained in the data set. With k = 2, the two visible clusters (one containing two species) will be discovered, whereas with k = 3 one of the two clusters will be split into two even parts. In fact, k = 2 is more appropriate for this data set, despite the data set containing 3 classes. As with any other clustering algorithm, the k-means result relies on the data set to satisfy the assumptions made by the algorithm. It works very well on some data sets, while failing miserably on others.

            The result of k-means can also be seen as the Voronoi cells of the cluster means. Since data is split halfway between cluster means, this can lead to suboptimal splits as can be seen in the "mouse" example. The Gaussian models used by the Expectation-maximization algorithm (which can be seen as a generalization of k-means) are more flexible here by having both variances and covariances. The EM result is thus able to accommodate clusters of variable size much better than k-means as well as correlated clusters (not in this example).

             

            This post is a conceptual introduction; code and a paper on a k-means optimization will follow in a later post:

            Fast Hierarchical Clustering Algorithm Using Locality-Sensitive Hashing

            posted on 2010-10-19 18:57 by Sosi | Views: 1597 | Comments: 0 | Category: Courses
