Wednesday, November 25, 2015

[ DM Practical MLT ] (4) Algorithms - Clustering

Introduction (p169) 
Clustering techniques apply when there is no class to be predicted but rather when the instances are to be divided into natural groups. These clusters presumably reflect some mechanism at work in the domain from which instances are drawn, a mechanism that causes some instances to bear a stronger resemblance to each other than they do to the remaining instances. Clustering naturally requires different techniques to the classification and association learning methods we have considered so far. 

There are different ways in which the result of clustering can be expressed. The groups that are identified may be exclusive so that any instance belongs in only one group. Or they may be overlapping so that an instance may fall into several groups. Or they may be probabilistic, whereby an instance belongs to each group with a certain probability. Or they may be hierarchical, such that there is a crude division of instances into groups at the top level, and each of these groups is refined further—perhaps all the way down to individual instances. Really, the choice among these possibilities should be dictated by the nature of the mechanisms that are thought to underlie the particular clustering phenomenon. However, because these mechanisms are rarely known—the very existence of clusters is, after all, something that we’re trying to discover—and for pragmatic reasons too, the choice is usually dictated by the clustering tools that are available. 

We will examine an algorithm that forms clusters in numeric domains, partitioning instances into disjoint clusters. Like the basic nearest-neighbor method of instance-based learning, it is a simple and straightforward technique that has been used for several decades. In Chapter 6 we examine newer clustering methods that perform incremental and probabilistic clustering. 

Iterative distance-based clustering (p170) 
The classic clustering technique is called k-means. First, you specify in advance how many clusters are being sought: this is the parameter k. Then k points are chosen at random as cluster centers. All instances are assigned to their closest cluster center according to the ordinary Euclidean distance metric. Next, the centroid, or mean, of the instances in each cluster is calculated—this is the “means” part. These centroids are taken to be new center values for their respective clusters. Finally, the whole process is repeated with the new cluster centers. Iteration continues until the same points are assigned to each cluster in consecutive rounds, at which stage the cluster centers have stabilized and will remain the same forever.
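To make these steps concrete, here is a minimal plain-Groovy sketch of the loop for 2-D points: random initial centers, assignment by Euclidean distance, centroid update, and repetition until the assignments stop changing. The function name, the point format, and the seed parameter are only for illustration; the GML demo further down uses the library's own KMeansBasic instead.

  // Minimal k-means sketch on 2-D points (illustration only, not GML).
  def kMeansSketch(List<double[]> points, int k, int maxIter = 100, long seed = 42) {
      def rnd = new Random(seed)
      // 1. Choose k points at random as the initial cluster centers.
      def centers = (0..<k).collect { (double[]) points[rnd.nextInt(points.size())].clone() }
      def assign = [-1] * points.size()
      for (iter in 0..<maxIter) {
          boolean changed = false
          // 2. Assign every instance to its closest center (squared Euclidean distance).
          points.eachWithIndex { p, i ->
              def best = (0..<k).min { c ->
                  double dx = p[0] - centers[c][0], dy = p[1] - centers[c][1]
                  dx * dx + dy * dy
              }
              if (assign[i] != best) { assign[i] = best; changed = true }
          }
          // 4. Stop once the same points are assigned to each cluster as in the last round.
          if (!changed) break
          // 3. Recompute each center as the centroid (mean) of its cluster.
          (0..<k).each { c ->
              def members = (0..<points.size()).findAll { assign[it] == c }
              if (members) {
                  centers[c][0] = members.sum { points[it][0] } / members.size()
                  centers[c][1] = members.sum { points[it][1] } / members.size()
              }
          }
      }
      return [centers, assign]
  }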
 


This clustering method is simple and effective. It is easy to prove that choosing the cluster center to be the centroid minimizes the total squared distance from each of the cluster’s points to its center. Once the iteration has stabilized, each point is assigned to its nearest cluster center, so the overall effect is to minimize the total squared distance from all points to their cluster centers. But the minimum is a local one; there is no guarantee that it is the global minimum. The final clusters are quite sensitive to the initial cluster centers: completely different arrangements can arise from small changes in the initial random choice. In fact, this is true of all practical clustering techniques: it is almost always infeasible to find globally optimal clusters. To increase the chance of finding a global minimum, people often run the algorithm several times with different initial choices and choose the best final result—the one with the smallest total squared distance.
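A hedged sketch of that restart idea, reusing the hypothetical kMeansSketch above and assuming the data is already available as a list of 2-D points called points: run the algorithm with several different seeds and keep the run with the smallest total squared distance.

  // Total squared distance from every point to its assigned center.
  def totalSquaredDistance = { List<double[]> pts, centers, assign ->
      double total = 0
      pts.eachWithIndex { p, i ->
          def c = centers[assign[i]]
          double dx = p[0] - c[0], dy = p[1] - c[1]
          total += dx * dx + dy * dy
      }
      return total
  }

  // Several restarts with different seeds; keep the best (smallest) result.
  def bestResult = null
  double bestScore = Double.MAX_VALUE
  (0..<10).each { trial ->
      def (centers, assign) = kMeansSketch(points, 4, 100, trial)
      double score = totalSquaredDistance(points, centers, assign)
      if (score < bestScore) { bestScore = score; bestResult = [centers, assign] }
  }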

Faster distance calculations (p171) 
The k-means clustering algorithm usually requires several iterations, each of which involves computing the distance from every instance to each of the k cluster centers in order to determine its cluster. There are simple approximations that speed this up considerably. For example, you can project the dataset and make cuts along selected axes, instead of using the arbitrary hyperplane divisions that are implied by choosing the nearest cluster center. But this inevitably compromises the quality of the resulting clusters.

Here’s a better way of speeding things up. Finding the closest cluster center is not so different from finding nearest neighbors in instance-based learning. Can the same efficient solutions—kD-trees and ball trees—be used? Yes! Indeed they can be applied in an even more efficient way, because in each iteration of k-means all the data points are processed together, whereas in instance-based learning test instances are processed individually. 

First, construct a kD-tree or ball tree for all the data points, which will remain static throughout the clustering procedure. Each iteration of k-means produces a set of cluster centers, and all data points must be examined and assigned to the nearest center. One way of processing the points is to descend the tree from the root until reaching a leaf and check each individual point in the leaf to find its closest cluster center. But it may be that the region represented by a higher interior node falls entirely within the domain of a single cluster center. In that case all the data points under that node can be processed in one blow! 

The aim of the exercise, after all, is to find new positions for the cluster centers by calculating the centroid of the points they contain. The centroid can be calculated by keeping a running vector sum of the points in the cluster, and a count of how many there are so far. At the end, just divide one by the other to find the centroid. Suppose that with each node of the tree we store the vector sum of the points within that node and a count of the number of points. If the whole node falls within the ambit of a single cluster, the running totals for that cluster can be updated immediately. If not, look inside the node by proceeding recursively down the tree. 
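The running-totals idea is easy to picture in code. Below is a small sketch (not a GML class) of per-cluster statistics: a vector sum plus a count, a method for folding in a whole tree node whose sum and count have been precomputed, and a centroid obtained by dividing one by the other.

  // Per-cluster running totals (sketch only, not part of GML).
  class ClusterStats {
      double[] sum
      long count = 0
      ClusterStats(int dims) { sum = new double[dims] }
      // Add a single data point to the running totals.
      void addPoint(double[] p) {
          for (int i = 0; i < p.length; i++) sum[i] += p[i]
          count++
      }
      // Fold in an entire tree node at once, using its pre-stored sum and count.
      void addNode(double[] nodeSum, long nodeCount) {
          for (int i = 0; i < nodeSum.length; i++) sum[i] += nodeSum[i]
          count += nodeCount
      }
      // The new cluster center is simply the vector sum divided by the count.
      double[] centroid() {
          double[] c = new double[sum.length]
          for (int i = 0; i < sum.length; i++) c[i] = sum[i] / count
          return c
      }
  }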

Figure 4.16 shows the same instances and ball tree as Figure 4.14, but with two cluster centers marked as black stars. Because all instances are assigned to the closest center, the space is divided in two by the thick line shown in Figure 4.16(a). Begin at the root of the tree in Figure 4.16(b), with initial values for the vector sum and counts for each cluster; all initial values are zero. Proceed recursively down the tree. When node A is reached, all points within it lie in cluster 1, so cluster 1’s sum and count can be updated with the sum and count for node A, and we need descend no further. Recursing back to node B, its ball straddles the boundary between the clusters, so its points must be examined individually. When node C is reached, it falls entirely within cluster 2; again, we can update cluster 2 immediately and need descend no further. The tree is only examined down to the frontier marked by the dashed line in Figure 4.16(b), and the advantage is that the nodes below need not be opened—at least, not on this particular iteration of k-means. Next time, the cluster centers will have changed and things may be different.
 

Figure 4.16 A ball tree: (a) two cluster centers and their dividing line and (b) the corresponding tree. 
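To show how the pruning could look in code, here is a hedged 2-D sketch of the recursion. The BallNode structure, the ownership test, and the helper closures are all hypothetical illustrations, not GML classes; the sketch reuses the ClusterStats class from above. A node that is entirely owned by one center is folded into that cluster's totals in one step; otherwise the recursion continues into its children, or scans the individual points at a leaf.

  // Hedged sketch (not part of GML): pruned assignment over a ball tree.
  // Each node stores its ball (pivot, radius), the vector sum and count of the
  // points below it, and either two children or a list of leaf points.
  class BallNode {
      double[] pivot; double radius
      double[] sum; long count
      BallNode left, right              // null for a leaf
      List<double[]> points = []        // only used at leaves
      boolean isLeaf() { left == null && right == null }
  }

  def dist = { double[] a, double[] b ->
      double dx = a[0] - b[0], dy = a[1] - b[1]
      Math.sqrt(dx * dx + dy * dy)
  }

  // A center "owns" the whole ball if it is closer to the ball's farthest
  // point than any other center is to the ball's nearest point.
  def ownedBy = { BallNode node, List<double[]> centers ->
      def d = centers.collect { dist(it, node.pivot) }
      int best = d.indexOf(d.min())
      boolean sole = (0..<centers.size()).every { c ->
          c == best || d[best] + node.radius < d[c] - node.radius
      }
      return sole ? best : -1
  }

  // Recursive assignment: fold whole nodes into ClusterStats when possible.
  def assignNode
  assignNode = { BallNode node, List<double[]> centers, List<ClusterStats> stats ->
      int owner = ownedBy(node, centers)
      if (owner >= 0) {
          stats[owner].addNode(node.sum, node.count)   // whole node in one blow
      } else if (node.isLeaf()) {
          node.points.each { p ->
              def d = centers.collect { dist(it, p) }
              stats[d.indexOf(d.min())].addPoint(p)
          }
      } else {
          assignNode(node.left, centers, stats)
          assignNode(node.right, centers, stats)
      }
  }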

Lab Demo - GML 
Here we will use the dataset from [ ML In Action ] Classifying with k-Nearest Neighbors (testSet.txt) and GML as our toolkit to demonstrate how clustering works. First, let's import all the necessary classes into our sample code:

  import la.LA
  import la.Matrix
  import ml.cluster.KMeansBasic
  import ml.data.ui.ClusterDataInXYChart
  import org.jfree.ui.RefineryUtilities
  import flib.util.Tuple as JT
Then we load the test data set and visualize it:
  /**
   * Load the test data set.
   * @see
   *  https://github.com/libing360/machine-learning-in-action/blob/master/Ch10/testSet.txt
   *
   * @return
   *  Test data set as a Matrix
   */
  def loadDataSet()
  {
      def dataSet = []
      new File("data/ch10/testSet.txt").eachLine { line ->
          def testData = []
          line.split("\t").each { testData.add(Float.valueOf(it)) }
          dataSet.add(testData)
      }
      return LA.newMtxByVal(dataSet)
  }

  printf("\t[Info] Loading test data...\n")
  Matrix testMat = loadDataSet()
  printf("%s\n\n", testMat)

  // Visualize test data
  List datas = new ArrayList()
  testMat.r().times { ri ->
      datas.add(new JT(1, testMat.r(ri)))
  }
  ClusterDataInXYChart demo = new ClusterDataInXYChart("k-Means Test Data Distribution", datas, 1);
  demo.pack();
  RefineryUtilities.centerFrameOnScreen(demo);
  demo.setVisible(true);
The data distribution looks like this:
 


Let's divide the data set into four clusters and visualize the result:

  // Divide the data set into four clusters
  KMeansBasic kmb = new KMeansBasic(k:4)

  // Using K-Means algorithm
  JT kmo = kmb.kMeans(testMat)
  Matrix centroids = kmo.get(0), clusterAssment = kmo.get(1)
  datas = new ArrayList()
  testMat.r().times { ri ->
      datas.add(new JT(clusterAssment.v(ri, 0) + 1, testMat.r(ri)))
  }
  centroids.r().times { ri ->
      datas.add(new JT(0, centroids.r(ri)))
  }

  // Visualize the result
  demo = new ClusterDataInXYChart(String.format("k-Means Test Data Distribution (%d Class)", kmb.k), datas, 4);
  demo.pack();
  RefineryUtilities.centerFrameOnScreen(demo);
  demo.setVisible(true);
The clustering result looks like this:
 


The full sample script can be downloaded here

Discussion (p173) 
Many variants of the basic k-means procedure have been developed. Some produce a hierarchical clustering by applying the algorithm with k = 2 to the overall dataset and then repeating, recursively, within each cluster. How do you choose k? Often nothing is known about the likely number of clusters, and the whole point of clustering is to find out. One way is to try different values and choose the best. To do this you need to learn how to evaluate the success of machine learning, which is what Chapter 5 is about. We return to clustering in Section 6.6.
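As a quick, hedged illustration of "try different values and choose the best" with the GML demo above: run k-means for several values of k and compare the total squared distance from each instance to its assigned centroid. This is a sketch only; it reuses the calls from the demo, assumes the test data has two attributes (as in testSet.txt), and the "look for the elbow" reading of the numbers is a common heuristic rather than something the book prescribes. Note that the raw total always shrinks as k grows, so the smallest value alone is not a good criterion.

  // Try several values of k and report the total squared distance for each.
  (2..6).each { k ->
      KMeansBasic kmb2 = new KMeansBasic(k: k)
      JT kmo2 = kmb2.kMeans(testMat)
      Matrix cents = kmo2.get(0), assment = kmo2.get(1)
      double totalSqDist = 0
      testMat.r().times { ri ->
          int c = (int) assment.v(ri, 0)              // assigned cluster index
          double dx = testMat.v(ri, 0) - cents.v(c, 0)
          double dy = testMat.v(ri, 1) - cents.v(c, 1)
          totalSqDist += dx * dx + dy * dy
      }
      printf("\tk=%d -> total squared distance=%.3f\n", k, totalSqDist)
  }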

Supplement 
[ ML In Action ] Classifying with k-Nearest Neighbors 
[ ML In Action ] Unsupervised learning : The k-means clustering algorithm (1) 
[ ML In Action ] Unsupervised learning : The k-means clustering algorithm (2)

