**Introduction**

In instance-based learning the training examples are stored verbatim, and **a distance function is used to determine which member of the training set is closest to an unknown test instance. Once the nearest training instance has been located, its class is predicted for the test instance**. The only remaining problem is defining the distance function, and that is not very difficult to do, particularly if the attributes are numeric.

**The distance function**

Although there are other possible choices, most instance-based learners use **Euclidean distance**. The distance between an instance with attribute values a1^(1), a2^(1), . . . , ak^(1) (where *k* is the number of attributes) and one with values a1^(2), a2^(2), . . . , ak^(2) is defined as

sqrt( (a1^(1) − a1^(2))^2 + (a2^(1) − a2^(2))^2 + . . . + (ak^(1) − ak^(2))^2 )

When comparing distances it is not necessary to perform the square root operation; the sums of squares can be compared directly. One alternative to the Euclidean distance is the Manhattan or city-block metric, where the difference between attribute values is not squared but just added up (after taking the absolute value). Others are obtained by taking powers higher than the square.

**Higher powers increase the influence of large differences at the expense of small differences.** Generally, the Euclidean distance represents a good compromise. Other distance metrics may be more appropriate in special circumstances.

**The key is to think of actual instances and what it means for them to be separated by a certain distance—what would twice that distance mean, for example?**

Different attributes are measured on different scales, so if the Euclidean distance formula were used directly, **the effects of some attributes might be completely dwarfed by others that had larger scales of measurement. Consequently, it is usual to normalize all attribute values to lie between 0 and 1, by calculating**

ai = (vi − min vi) / (max vi − min vi)

where vi is the actual value of attribute *i*, and the maximum and minimum are taken over all instances in the training set.

These formulae implicitly assume numeric attributes. Here, the difference between two values is just the numerical difference between them, and it is this difference that is squared and added to yield the distance function.

**For nominal attributes that take on values that are symbolic rather than numeric, the difference between two values that are not the same is often taken to be one, whereas if the values are the same the difference is zero**. No scaling is required in this case because only the values 0 and 1 are used.

**A common policy for handling missing values is as follows.** For nominal attributes, assume that a missing feature is maximally different from any other feature value. Thus if either or both values are missing, or if the values are different, the difference between them is taken as one; the difference is zero only if they are not missing and both are the same. For numeric attributes, the difference between two missing values is also taken as one. However, if just one value is missing, the difference is often taken as either the (normalized) size of the other value or one minus that size, whichever is larger. This means that if values are missing, the difference is as large as it can possibly be.
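These distance rules can be sketched in a few Python helpers. This is an illustrative implementation, not the book's code: numeric attributes are normalized to [0, 1] using training-set ranges, nominal mismatches count as a difference of one, and `None` marks a missing value and follows the policy just described.

```python
def normalize(value, lo, hi):
    """Scale a numeric value into [0, 1] using the training-set range."""
    if hi == lo:
        return 0.0
    return (value - lo) / (hi - lo)

def attribute_difference(x, y, numeric, lo=None, hi=None):
    """Per-attribute difference under the policies described in the text."""
    if x is None and y is None:
        return 1.0                      # both missing: maximally different
    if numeric:
        if x is None or y is None:      # one missing: as large as possible
            v = normalize(y if x is None else x, lo, hi)
            return max(v, 1.0 - v)
        return normalize(x, lo, hi) - normalize(y, lo, hi)
    # nominal: 0 if equal, 1 otherwise (a single missing value is "different")
    return 0.0 if x == y else 1.0

def squared_distance(a, b, numeric_flags, ranges):
    """Sum of squared per-attribute differences; the square root can be
    skipped when distances are only being compared."""
    total = 0.0
    for i, numeric in enumerate(numeric_flags):
        lo, hi = ranges[i] if numeric else (None, None)
        d = attribute_difference(a[i], b[i], numeric, lo, hi)
        total += d * d
    return total
```

For example, `squared_distance([0.0, 'red'], [10.0, 'red'], [True, False], [(0.0, 10.0), None])` compares one numeric and one nominal attribute in a single call.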

**Finding nearest neighbors efficiently**

Although **instance-based learning is simple and effective, it is often slow**. The obvious way to find which member of the training set is closest to an unknown test instance is to calculate the distance from every member of the training set and select the smallest. This procedure is linear in the number of training instances: in other words, the time it takes to make a single prediction is proportional to the number of training instances. Processing an entire test set takes time proportional to the product of the number of instances in the training and test sets.

Nearest neighbors can be found more efficiently by representing the training set as a tree, although it is not quite obvious how. One suitable structure is a **kD-tree**. This is a binary tree that divides the input space with a hyperplane and then splits each partition again, recursively. All splits are made parallel to one of the axes, either vertically or horizontally, in the two-dimensional case. The data structure is called a kD-tree because it stores a set of points in *k*-dimensional space, *k* being the number of attributes.

Figure 4.12(a) gives a small example with *k* = 2, and Figure 4.12(b) shows the four training instances it represents, along with the hyperplanes that constitute the tree. Note that these hyperplanes are not decision boundaries: decisions are made on a nearest-neighbor basis as explained later. The first split is horizontal (h), through the point (7,4)—this is the tree’s root. The left branch is not split further: it contains the single point (2,2), which is a leaf of the tree. The right branch is split vertically (v) at the point (6,7). Its left child is empty, and its right child contains the point (3,8). As this example illustrates, each region contains just one point—or, perhaps, no points. Sibling branches of the tree—for example, the two daughters of the root in Figure 4.12(a)—are not necessarily developed to the same depth. Every point in the training set corresponds to a single node, and up to half are leaf nodes.

**Figure 4.12 A kD-tree for four training instances: (a) the tree and (b) instances and splits.**

How do you build a kD-tree from a dataset? Can it be updated efficiently as new training examples are added? And how does it speed up nearest-neighbor calculations? We tackle the last question first.

To locate the nearest neighbor of a given target point, follow the tree down from its root to locate the region containing the target. Figure 4.13 shows a space like that of Figure 4.12(b) but with a few more instances and an extra boundary. The target, which is not one of the instances in the tree, is marked by a star.

**The leaf node of the region containing the target is colored black. This is not necessarily the target’s closest neighbor, as this example illustrates, but it is a good first approximation.** In particular, any nearer neighbor must lie closer—within the dashed circle in Figure 4.13. To determine whether one exists, first check whether it is possible for a closer neighbor to lie within the node’s sibling. The black node’s sibling is shaded in Figure 4.13, and the circle does not intersect it, so the sibling cannot contain a closer neighbor. Then back up to the parent node and check its sibling—which here covers everything above the horizontal line. In this case it must be explored, because the area it covers intersects with the best circle so far. To explore it, find its daughters (the original point’s two aunts), check whether they intersect the circle (the left one does not, but the right one does), and descend to see whether it contains a closer point (it does).

**In a typical case, this algorithm is far faster than examining all points to find the nearest neighbor.** The work involved in finding the initial approximate nearest neighbor—the black point in Figure 4.13—depends on the depth of the tree, given by the logarithm of the number of nodes, log2(n). The amount of work involved in backtracking to check whether this really is the nearest neighbor depends a bit on the tree, and on how good the initial approximation is. But for a well-constructed tree whose nodes are approximately square, rather than long skinny rectangles, it can also be shown to be logarithmic in the number of nodes.

How do you build a good tree for a set of training examples? The problem boils down to selecting the first training instance to split at and the direction of the split. Once you can do that, apply the same method recursively to each child of the initial split to construct the entire tree.

**To find a good direction for the split, calculate the variance of the data points along each axis individually, select the axis with the greatest variance, and create a splitting hyperplane perpendicular to it.** To find a good place for the hyperplane, locate the median value along that axis and select the corresponding point. This makes the split perpendicular to the direction of greatest spread, with half the points lying on either side. This produces a well-balanced tree. To avoid long skinny regions it is best for successive splits to be along different axes, which is likely because the dimension of greatest variance is chosen at each stage. However, if the distribution of points is badly skewed, choosing the median value may generate several successive splits in the same direction, yielding long, skinny hyperrectangles. A better strategy is to calculate the mean rather than the median and use the point closest to that. The tree will not be perfectly balanced, but its regions will tend to be squarish because there is a greater chance that different directions will be chosen for successive splits.
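The construction and search procedures can be sketched together in Python. This is a minimal illustration assuming numeric attributes and Euclidean distance; the class and function names are my own choices, not from the book.

```python
import math

class KDNode:
    def __init__(self, point, axis, left=None, right=None):
        self.point, self.axis = point, axis
        self.left, self.right = left, right

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def build_kd_tree(points):
    """Split perpendicular to the axis of greatest variance, at the median point."""
    if not points:
        return None
    k = len(points[0])
    axis = max(range(k), key=lambda a: variance([p[a] for p in points]))
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return KDNode(points[mid], axis,
                  build_kd_tree(points[:mid]),
                  build_kd_tree(points[mid + 1:]))

def nearest(node, target, best=None):
    """Descend toward the target's region, then backtrack, pruning a subtree
    when the best circle so far does not cross its splitting plane."""
    if node is None:
        return best
    d = math.dist(node.point, target)
    if best is None or d < best[1]:
        best = (node.point, d)
    delta = target[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if delta <= 0 else (node.right, node.left)
    best = nearest(near, target, best)
    if abs(delta) < best[1]:            # circle crosses the splitting plane
        best = nearest(far, target, best)
    return best
```

For instance, `build_kd_tree([(7, 4), (6, 7), (3, 8), (2, 2)])` builds a tree over the four points of Figure 4.12, and `nearest(tree, (7, 4.5))` returns the closest stored point with its distance.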

**An advantage of instance-based learning over most other machine learning methods is that new examples can be added to the training set at any time.** To retain this advantage when using a kD-tree, we need to be able to update it incrementally with new data points. To do this, determine which leaf node contains the new point and find its hyperrectangle. If it is empty, simply place the new point there. Otherwise split the hyperrectangle, splitting it along its longest dimension to preserve squareness. This simple heuristic does not guarantee that adding a series of points will preserve the tree’s balance, nor that the hyperrectangles will be well shaped for nearest-neighbor search.

**It is a good idea to rebuild the tree from scratch occasionally—for example, when its depth grows to twice the best possible depth.**

As we have seen, **kD-trees are good data structures for finding nearest neighbors efficiently. However, they are not perfect. Skewed datasets present a basic conflict between the desire for the tree to be perfectly balanced and the desire for regions to be squarish.**

**More importantly, rectangles—even squares—are not the best shape to use anyway, because of their corners.** If the dashed circle in Figure 4.13 were any bigger, which it would be if the black instance were a little further from the target, it would intersect the lower right-hand corner of the rectangle at the top left and then that rectangle would have to be investigated, too—despite the fact that the training instances that define it are a long way from the corner in question. The corners of rectangular regions are awkward.

**The solution? Use hyperspheres, not hyperrectangles.** Neighboring spheres may overlap whereas rectangles can abut, but this is not a problem because the nearest-neighbor algorithm for kD-trees described previously does not depend on the regions being disjoint. A data structure called a ball tree defines k-dimensional hyperspheres (“balls”) that cover the data points, and arranges them into a tree.

**Figure 4.14 Ball tree for 16 training instances: (a) instances and balls and (b) the tree.**

Figure 4.14(a) shows 16 training instances in two-dimensional space, overlaid by a pattern of overlapping circles, and Figure 4.14(b) shows a tree formed from these circles. Circles at different levels of the tree are indicated by different styles of dash, and the smaller circles are drawn in shades of gray. Each node of the tree represents a ball, and the node is dashed or shaded according to the same convention so that you can identify which level the balls are at. To help you understand the tree, numbers are placed on the nodes to show how many data points are deemed to be inside that ball. But be careful: this is not necessarily the same as the number of points falling within the spatial region that the ball represents. The regions at each level sometimes overlap, but points that fall into the overlap area are assigned to only one of the overlapping balls (the diagram does not show which one). Instead of the occupancy counts in Figure 4.14(b), the nodes of actual ball trees store the center and radius of their ball; leaf nodes record the points they contain as well.

To use a ball tree to find the nearest neighbor to a given target, start by traversing the tree from the top down to locate the leaf that contains the target and find the closest point to the target in that ball. This gives an upper bound for the target’s distance from its nearest neighbor. Then, just as for the kD-tree, examine the sibling node. **If the distance from the target to the sibling’s center exceeds its radius plus the current upper bound, it cannot possibly contain a closer point; otherwise the sibling must be examined by descending the tree further**. In Figure 4.15 the target is marked with a star and the black dot is its closest currently known neighbor. The entire contents of the gray ball can be ruled out: it cannot contain a closer point because its center is too far away. Proceed recursively back up the tree to its root, examining any ball that may possibly contain a point nearer than the current upper bound.
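The pruning rule translates directly into code. The helper below is a small sketch (the names are my own): by the triangle inequality, no point inside a ball can be closer to the target than the distance to the ball's center minus its radius.

```python
import math

def can_prune(target, center, radius, upper_bound):
    """True if a ball cannot contain a point closer than upper_bound:
    every point in the ball is at least dist(target, center) - radius away."""
    return math.dist(target, center) > radius + upper_bound
```

So a ball centered 10 units away with radius 2 is safely skipped when the current best distance is 3, since no point inside it can be nearer than 8.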

**Figure 4.15 Ruling out an entire ball (gray) based on a target point (star) and its current nearest neighbor.**

Ball trees are built from the top down, and as with kD-trees the basic problem is to find a good way of splitting a ball containing a set of data points into two. In practice you do not have to continue until the leaf balls contain just two points: you can stop earlier, once a predetermined minimum number is reached—and the same goes for kD-trees. Here is one possible splitting method. Choose the point in the ball that is farthest from its center, and then a second point that is farthest from the first one. Assign all data points in the ball to the closest one of these two cluster centers, then compute the centroid of each cluster and the minimum radius required for it to enclose all the data points it represents. This method has the merit that the cost of splitting a ball containing *n* points is only linear in *n*. There are more elaborate algorithms that produce tighter balls, but they require more computation. We will not describe sophisticated algorithms for constructing ball trees or updating them incrementally as new training instances are encountered.
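The splitting step just described can be sketched as follows. This is an illustrative helper (assuming at least two distinct points in the ball), not the book's implementation: pick the point farthest from the ball's center, then the point farthest from that one, assign every point to the nearer of the two, and compute each cluster's centroid and enclosing radius.

```python
import math

def split_ball(points):
    """One linear-time split of a ball's points into two child balls.
    Returns two (centroid, radius, members) triples."""
    k = len(points[0])
    center = tuple(sum(p[a] for p in points) / len(points) for a in range(k))
    p1 = max(points, key=lambda p: math.dist(p, center))   # farthest from center
    p2 = max(points, key=lambda p: math.dist(p, p1))       # farthest from p1
    clusters = ([], [])
    for p in points:
        clusters[0 if math.dist(p, p1) <= math.dist(p, p2) else 1].append(p)
    balls = []
    for members in clusters:
        c = tuple(sum(p[a] for p in members) / len(members) for a in range(k))
        r = max(math.dist(p, c) for p in members)
        balls.append((c, r, members))
    return balls
```

Each of the two passes over the points is linear in *n*, which is what makes this heuristic cheap compared with tighter but more expensive splitting algorithms.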

**Discussion**

Nearest-neighbor instance-based learning is simple and often works very well.

**In the method described previously each attribute has exactly the same influence on the decision, just as it does in the Naïve Bayes method. Another problem is that the database can easily become corrupted by noisy exemplars.** One solution is to adopt the k-nearest-neighbor strategy, where some fixed, small number *k* of nearest neighbors—say five—are located and used together to determine the class of the test instance through a simple majority vote. (Note that we used *k* to denote the number of attributes earlier; this is a different, independent usage.) Another way of proofing the database against noise is to choose the exemplars that are added to it selectively and judiciously; improved procedures, described in Chapter 6, address these shortcomings.
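The k-nearest-neighbor strategy fits in a few lines. This is a plain brute-force sketch using squared Euclidean distance on numeric attributes, with ties broken by `Counter`'s ordering:

```python
from collections import Counter

def knn_classify(training, target, k=5):
    """training is a list of (point, label) pairs; predict by majority vote
    among the k nearest points (squared distances suffice for ranking)."""
    by_distance = sorted(
        training,
        key=lambda pl: sum((a - b) ** 2 for a, b in zip(pl[0], target)))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]
```

Because every prediction scans the whole training set, this runs in time linear in the number of training instances, which is exactly the cost the kD-tree and ball-tree structures above are designed to avoid.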

The nearest-neighbor method originated many decades ago, and statisticians analyzed k-nearest-neighbor schemes in the early 1950s.

**If the number of training instances is large, it makes intuitive sense to use more than one nearest neighbor, but clearly this is dangerous if there are few instances.** It can be shown that when *k* and the number *n* of instances both become infinite in such a way that *k*/*n* → 0, the probability of error approaches the theoretical minimum for the dataset. The nearest-neighbor method was adopted as a classification method in the early 1960s and has been widely used in the field of pattern recognition for more than three decades.

Nearest-neighbor classification was notoriously slow until kD-trees began to be applied in the early 1990s, although the data structure itself was developed much earlier.

**In practice, these trees become inefficient when the dimension of the space increases and are only worthwhile when the number of attributes is small—up to 10.** Ball trees were developed much more recently and are an instance of a more general structure sometimes called a **metric tree**. Sophisticated algorithms can create metric trees that deal successfully with thousands of dimensions.

**Instead of storing all training instances, you can compress them into regions. A very simple technique, mentioned at the end of Section 4.1, is to just record the range of values observed in the training data for each attribute and category.** Given a test instance, you work out which ranges the attribute values fall into and choose the category with the greatest number of correct ranges for that instance. A slightly more elaborate technique is to construct intervals for each attribute and use the training set to count the number of times each class occurs for each interval on each attribute. Numeric attributes can be discretized into intervals, and “intervals” consisting of a single point can be used for nominal ones. Then, given a test instance, you can determine which intervals it resides in and classify it by voting, a method called **voting feature intervals**. These methods are very approximate, but very fast, and can be useful for initial analysis of large datasets.
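The interval-counting idea can be sketched as follows. This is an illustrative version for numeric attributes only, using equal-width discretization; the bin count of four is an arbitrary choice, and the function names are mine.

```python
from collections import Counter, defaultdict

def make_bins(values, n_bins=4):
    """Equal-width discretization parameters for one attribute."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0
    return lo, width, n_bins

def bin_index(value, lo, width, n_bins):
    """Clamp out-of-range test values into the first or last interval."""
    return max(0, min(int((value - lo) / width), n_bins - 1))

def train_vfi(instances, labels, n_bins=4):
    """Count, per attribute and per interval, how often each class occurs."""
    k = len(instances[0])
    bins = [make_bins([x[i] for x in instances], n_bins) for i in range(k)]
    counts = [defaultdict(Counter) for _ in range(k)]
    for x, y in zip(instances, labels):
        for i in range(k):
            counts[i][bin_index(x[i], *bins[i])][y] += 1
    return bins, counts

def classify_vfi(model, x):
    """Each attribute votes with the class counts of the target's interval."""
    bins, counts = model
    votes = Counter()
    for i, value in enumerate(x):
        votes += counts[i][bin_index(value, *bins[i])]
    return votes.most_common(1)[0][0]
```

Training is a single pass over the data and classification touches one interval per attribute, which is why the method is fast enough for a first look at a large dataset despite being very approximate.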

**Supplement**

* __Geometric Applications of BSTs - KD-tree__
