In the previous section we covered Gaussian mixture models (GMM), which are a kind of hybrid between a clustering estimator and a density estimator. Recall that a density estimator is an algorithm that takes a D-dimensional dataset and produces an estimate of the D-dimensional probability distribution which that data is drawn from. The GMM algorithm accomplishes this by representing the density as a weighted sum of Gaussian distributions. Kernel density estimation (KDE) is in some senses an algorithm that takes the mixture-of-Gaussians idea to its logical extreme: it uses a mixture consisting of one Gaussian component per point, resulting in an essentially nonparametric estimator of density. In this section, we will explore the motivation and uses of KDE. We begin with the standard imports:
- %matplotlib inline
- import matplotlib.pyplot as plt
- import seaborn as sns; sns.set()
- import numpy as np
As already discussed, a density estimator is an algorithm that seeks to model the probability distribution that generated a dataset. For one-dimensional data, you are probably already familiar with one simple density estimator: the histogram. A histogram divides the data into discrete bins, counts the number of points that fall in each bin, and then visualizes the results in an intuitive manner.
For example, let’s create some data that is drawn from two normal distributions:
- def make_data(N, f=0.3, rseed=1):
-     rand = np.random.RandomState(rseed)
-     x = rand.randn(N)
-     x[int(f * N):] += 5
-     return x
- x = make_data(1000)
- hist = plt.hist(x, bins=30, density=True)
Figure 5-140. Data drawn from a combination of normal distributions
Notice that for equal binning, this normalization simply changes the scale on the y-axis, leaving the relative heights essentially the same as in a histogram built from counts. This normalization is chosen so that the total area under the histogram is equal to 1, as we can confirm by looking at the output of the histogram function:
- density, bins, patches = hist
- widths = bins[1:] - bins[:-1]
- (density * widths).sum()
One of the issues with using a histogram as a density estimator is that the choice of bin size and location can lead to representations that have qualitatively different features. For example, if we look at a version of this data with only 20 points, the choice of how to draw the bins can lead to an entirely different interpretation of the data! Consider this example (visualized in Figure 5-141):
- x = make_data(20)
- bins = np.linspace(-5, 10, 10)
- fig, ax = plt.subplots(1, 2, figsize=(12, 4), sharex=True, sharey=True, subplot_kw={'xlim':(-4, 9), 'ylim':(-0.02, 0.3)})
- fig.subplots_adjust(wspace=0.05)
- for i, offset in enumerate([0.0, 0.6]):
-     ax[i].hist(x, bins=bins + offset, density=True)
-     ax[i].plot(x, np.full_like(x, -0.01), '|k', markeredgewidth=1)
Figure 5-141. The problem with histograms: the location of bins can affect interpretation
On the left, the histogram makes clear that this is a bimodal distribution. On the right, we see a unimodal distribution with a long tail. Without seeing the preceding code, you would probably not guess that these two histograms were built from the same data. With that in mind, how can you trust the intuition that histograms confer? And how might we improve on this?
Stepping back, we can think of a histogram as a stack of blocks, where we stack one block within each bin on top of each point in the dataset. Let’s view this directly (Figure 5-142):
- fig, ax = plt.subplots()
- bins = np.arange(-3, 8)
- ax.plot(x, np.full_like(x, -0.1), '|k', markeredgewidth=1)
- for count, edge in zip(*np.histogram(x, bins)):
-     for i in range(count):
-         ax.add_patch(plt.Rectangle((edge, i), 1, 1, alpha=0.5))
- ax.set_xlim(-4, 8)
- ax.set_ylim(-0.2, 8)
Figure 5-142. Histogram as stack of blocks
The problem with our two binnings stems from the fact that the height of the block stack often reflects not the actual density of nearby points, but coincidences of how the bins align with the data. What if, instead of stacking the blocks aligned with the bins, we were to center each block on the point it represents? The blocks won’t be aligned, but we can add their contributions at each location along the x-axis to find the result (Figure 5-143):
- x_d = np.linspace(-4, 8, 2000)
- density = sum((abs(xi - x_d) < 0.5) for xi in x)
- plt.fill_between(x_d, density, alpha=0.5)
- plt.plot(x, np.full_like(x, -0.1), '|k', markeredgewidth=1)
- plt.axis([-4, 8, -0.2, 8]);
Figure 5-143. A “histogram” where blocks center on each individual point; this is an example of a kernel density estimate
The result looks a bit messy, but is a much more robust reflection of the actual data characteristics than is the standard histogram. Still, the rough edges are not aesthetically pleasing, nor are they reflective of any true properties of the data. In order to smooth them out, we might decide to replace the blocks at each location with a smooth function, like a Gaussian. Let’s use a standard normal curve at each point instead of a block (Figure 5-144):
- from scipy.stats import norm
- x_d = np.linspace(-4, 8, 1000)
- density = sum(norm(xi).pdf(x_d) for xi in x)
- plt.fill_between(x_d, density, alpha=0.5)
- plt.plot(x, np.full_like(x, -0.1), '|k', markeredgewidth=1)
- plt.axis([-4, 8, -0.2, 5]);
Figure 5-144. A kernel density estimate with a Gaussian kernel
This smoothed-out plot, with a Gaussian distribution contributed at the location of each input point, gives a much more accurate idea of the shape of the data distribution, and one that has much less variance (i.e., changes much less in response to differences in sampling). These last two plots are examples of kernel density estimation in one dimension: the first uses a so-called “tophat” kernel and the second uses a Gaussian kernel. We’ll now look at kernel density estimation in more detail.
Kernel Density Estimation in Practice
The free parameters of kernel density estimation are the kernel, which specifies the shape of the distribution placed at each point, and the kernel bandwidth, which controls the size of the kernel at each point. In practice, there are many kernels you might use for a kernel density estimation: in particular, the Scikit-Learn KDE implementation supports one of six kernels, which you can read about in Scikit-Learn’s Density Estimation documentation.
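To get a feel for what these kernels look like, here is a short sketch (not part of the original walkthrough) that fits the KernelDensity estimator, introduced properly below, to a single point at the origin and plots the resulting profile for each of the six kernel names; the bandwidth of 1.0 and the plotting grid are arbitrary choices made only for illustration:
- from sklearn.neighbors import KernelDensity
- # fit each kernel to a single point at the origin and plot its shape
- x_plot = np.linspace(-3, 3, 1000)[:, None]
- kernels = ['gaussian', 'tophat', 'epanechnikov', 'exponential', 'linear', 'cosine']
- fig, ax = plt.subplots(2, 3, figsize=(12, 6), sharex=True, sharey=True)
- for axi, kernel in zip(ax.ravel(), kernels):
-     kde = KernelDensity(bandwidth=1.0, kernel=kernel).fit([[0.0]])
-     axi.fill_between(x_plot.ravel(), np.exp(kde.score_samples(x_plot)), alpha=0.5)
-     axi.set_title(kernel)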
While there are several versions of kernel density estimation implemented in Python (notably in the SciPy and StatsModels packages), I prefer to use Scikit-Learn’s version because of its efficiency and flexibility. It is implemented in the sklearn.neighbors.KernelDensity estimator, which handles KDE in multiple dimensions with one of six kernels and one of a couple dozen distance metrics. Because KDE can be fairly computationally intensive, the Scikit-Learn estimator uses a tree-based algorithm under the hood and can trade off computation time for accuracy using the atol (absolute tolerance) and rtol (relative tolerance) parameters. We can determine the kernel bandwidth, which is a free parameter, using Scikit-Learn’s standard cross-validation tools, as we will soon see.
Let’s first see a simple example of replicating the preceding plot using the Scikit-Learn KernelDensity estimator (Figure 5-145):
- from sklearn.neighbors import KernelDensity
- # instantiate and fit the KDE model
- kde = KernelDensity(bandwidth=1.0, kernel='gaussian')
- kde.fit(x[:, None])
- # score_samples returns the log of the probability density
- logprob = kde.score_samples(x_d[:, None])
- plt.fill_between(x_d, np.exp(logprob), alpha=0.5)
- plt.plot(x, np.full_like(x, -0.01), '|k', markeredgewidth=1)
- plt.ylim(-0.02, 0.22)
Figure 5-145. A kernel density estimate computed with Scikit-Learn
The result here is normalized such that the area under the curve is equal to 1.
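As a quick sanity check (not in the original text), we can numerically integrate the estimated density over the plotted range; the trapezoidal rule is accurate enough for this purpose:
- # the area under the KDE curve over the plotted range should be close to 1
- np.trapz(np.exp(logprob), x_d)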
Selecting the bandwidth via cross-validation
The choice of bandwidth within KDE is extremely important to finding a suitable density estimate, and is the knob that controls the bias–variance trade-off in the estimate of density: too narrow a bandwidth leads to a high-variance estimate (i.e., overfitting), where the presence or absence of a single point makes a large difference. Too wide a bandwidth leads to a high-bias estimate (i.e., underfitting) where the structure in the data is washed out by the wide kernel.
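To see this trade-off concretely, here is a small sketch (the bandwidth values 0.05 and 2.0 are chosen purely for illustration) comparing an overly narrow and an overly wide Gaussian kernel on the same 20-point dataset:
- # narrow bandwidth: high variance (overfitting); wide bandwidth: high bias (underfitting)
- fig, ax = plt.subplots(1, 2, figsize=(12, 4), sharey=True)
- for axi, bw in zip(ax, [0.05, 2.0]):
-     kde = KernelDensity(bandwidth=bw, kernel='gaussian').fit(x[:, None])
-     axi.fill_between(x_d, np.exp(kde.score_samples(x_d[:, None])), alpha=0.5)
-     axi.set_title('bandwidth = {0}'.format(bw))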
There is a long history in statistics of methods to quickly estimate the best bandwidth based on rather stringent assumptions about the data: if you look up the KDE implementations in the SciPy and StatsModels packages, for example, you will see implementations based on some of these rules.
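For reference, here is a sketch of one such classic rule, Silverman’s rule of thumb for a Gaussian kernel; it assumes the data are roughly Gaussian and is shown only for comparison, not used in the rest of this section:
- # Silverman's rule of thumb: h = 0.9 * min(std, IQR / 1.34) * n^(-1/5)
- sigma = np.std(x, ddof=1)
- iqr = np.subtract(*np.percentile(x, [75, 25]))
- h_silverman = 0.9 * min(sigma, iqr / 1.34) * len(x) ** (-0.2)
- print(h_silverman)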
In machine learning contexts, we’ve seen that such hyperparameter tuning often is done empirically via a cross-validation approach. With this in mind, the KernelDensity estimator in Scikit-Learn is designed such that it can be used directly within Scikit-Learn’s standard grid search tools. Here we will use GridSearchCV to optimize the bandwidth for the preceding dataset. Because we are looking at such a small dataset, we will use leave-one-out cross-validation, which minimizes the reduction in training set size for each cross-validation trial:
- from sklearn.model_selection import GridSearchCV
- from sklearn.model_selection import LeaveOneOut
- bandwidths = 10 ** np.linspace(-1, 1, 100)
- grid = GridSearchCV(KernelDensity(kernel='gaussian'), {'bandwidth': bandwidths}, cv=LeaveOneOut())
- grid.fit(x[:, None]);
- grid.best_params_
The optimal bandwidth happens to be very close to what we used in the example plot earlier, where the bandwidth was 1.0 (i.e., the default width of scipy.stats.norm).
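Because GridSearchCV refits the model with the best parameters by default, the fitted best_estimator_ can be used directly to redraw the density with the cross-validated bandwidth (a brief sketch, not part of the original walkthrough):
- # redraw the KDE using the cross-validated bandwidth
- kde_best = grid.best_estimator_
- plt.fill_between(x_d, np.exp(kde_best.score_samples(x_d[:, None])), alpha=0.5)
- plt.plot(x, np.full_like(x, -0.01), '|k', markeredgewidth=1);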
Example: KDE on a Sphere
Perhaps the most common use of KDE is in graphically representing distributions of points. For example, in the Seaborn visualization library (discussed earlier in “Visualization with Seaborn” on page 311), KDE is built in and automatically used to help visualize points in one and two dimensions.
Here we will look at a slightly more sophisticated use of KDE for visualization of distributions. We will make use of some geographic data that can be loaded with Scikit-Learn: the geographic distributions of recorded observations of two South American mammals, Bradypus variegatus (the brown-throated sloth) and Microryzomys minutus (the forest small rice rat).
With Scikit-Learn, we can fetch this data as follows:
- from sklearn.datasets import fetch_species_distributions
- data = fetch_species_distributions()
- # Get matrices/arrays of species IDs and locations
- latlon = np.vstack([data.train['dd lat'], data.train['dd long']]).T
- species = np.array([d.decode('ascii').startswith('micro') for d in data.train['species']], dtype='int')
- from mpl_toolkits.basemap import Basemap
- from sklearn.datasets.species_distributions import construct_grids
- xgrid, ygrid = construct_grids(data)
- # plot coastlines with Basemap
- m = Basemap(projection='cyl', resolution='c',
-             llcrnrlat=ygrid.min(), urcrnrlat=ygrid.max(),
-             llcrnrlon=xgrid.min(), urcrnrlon=xgrid.max())
- m.drawmapboundary(fill_color='#DDEEFF')
- m.fillcontinents(color='#FFEEDD')
- m.drawcoastlines(color='gray', zorder=2)
- m.drawcountries(color='gray', zorder=2)
- # plot locations
- m.scatter(latlon[:, 1], latlon[:, 0], zorder=3, c=species, cmap='rainbow', latlon=True);
Figure 5-146. Location of species in training data
Unfortunately, this doesn’t give a very good idea of the density of the species, because points in the species range may overlap one another. You may not realize it by looking at this plot, but there are over 1,600 points shown here! Let’s use kernel density estimation to show this distribution in a more interpretable way: as a smooth indication of density on the map. Because the coordinate system here lies on a spherical surface rather than a flat plane, we will use the haversine distance metric, which will correctly represent distances on a curved surface.
There is a bit of boilerplate code here (one of the disadvantages of the Basemap toolkit), but the meaning of each code block should be clear (Figure 5-147):
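The full Basemap code is not reproduced here; what follows is only a sketch of the central KDE step for one of the species, with an assumed bandwidth, an arbitrary grid subsampling, and none of the coastline overlay or land masking used for the figure:
- # sketch: spherical KDE for Bradypus variegatus (species == 0)
- kde = KernelDensity(bandwidth=0.04, metric='haversine',
-                     kernel='gaussian', algorithm='ball_tree')
- kde.fit(np.radians(latlon[species == 0]))
- # evaluate the density on a coarse latitude/longitude grid (in radians)
- X, Y = np.meshgrid(xgrid[::5], ygrid[::5][::-1])
- xy = np.radians(np.vstack([Y.ravel(), X.ravel()]).T)
- Z = np.exp(kde.score_samples(xy)).reshape(X.shape)
- # draw filled contours of the estimated density
- plt.contourf(X, Y, Z, levels=np.linspace(0, Z.max(), 25), cmap='Purples');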
Figure 5-147. A kernel density representation of the species distributions
Compared to the simple scatter plot we initially used, this visualization paints a much clearer picture of the geographical distribution of observations of these two species.
Example: Not-So-Naive Bayes
This example looks at Bayesian generative classification with KDE, and demonstrates how to use the Scikit-Learn architecture to create a custom estimator.
In “In Depth: Naive Bayes Classification”, we took a look at naive Bayesian classification, in which we created a simple generative model for each class, and used these models to build a fast classifier. For naive Bayes, the generative model is a simple axis-aligned Gaussian. With a density estimation algorithm like KDE, we can remove the “naive” element and perform the same classification with a more sophisticated generative model for each class. It’s still Bayesian classification, but it’s no longer naive.
The general approach for generative classification is this:
1. Split the training data by label.
2. For each set, fit a KDE to obtain a generative model of the data. This allows you, for any observation x and label y, to compute a likelihood P(x | y).
3. From the number of examples of each class in the training set, compute the class prior, P(y).
4. For an unknown point x, the posterior probability for each class is P(y | x) ∝ P(x | y) P(y). The class that maximizes this posterior is the label assigned to the point.
The algorithm is straightforward and intuitive to understand; the more difficult piece is couching it within the Scikit-Learn framework in order to make use of the grid search and cross-validation architecture. This is the code that implements the algorithm within the Scikit-Learn framework; we will step through it following the code block:
- from sklearn.base import BaseEstimator, ClassifierMixin
- class KDEClassifier(BaseEstimator, ClassifierMixin):
-     """Bayesian generative classification based on KDE
-     Parameters
-     ----------
-     bandwidth : float
-         the kernel bandwidth within each class
-     kernel : str
-         the kernel name, passed to KernelDensity
-     """
-     def __init__(self, bandwidth=1.0, kernel='gaussian'):
-         self.bandwidth = bandwidth
-         self.kernel = kernel
-     def fit(self, X, y):
-         self.classes_ = np.sort(np.unique(y))
-         training_sets = [X[y == yi] for yi in self.classes_]
-         self.models_ = [KernelDensity(bandwidth=self.bandwidth, kernel=self.kernel).fit(Xi) for Xi in training_sets]
-         self.logpriors_ = [np.log(Xi.shape[0] / X.shape[0]) for Xi in training_sets]
-         return self
-     def predict_proba(self, X):
-         logprobs = np.array([model.score_samples(X) for model in self.models_]).T
-         result = np.exp(logprobs + self.logpriors_)
-         return result / result.sum(1, keepdims=True)
-     def predict(self, X):
-         return self.classes_[np.argmax(self.predict_proba(X), 1)]
Let’s step through this code and discuss the essential features:
- from sklearn.base import BaseEstimator, ClassifierMixin
- class KDEClassifier(BaseEstimator, ClassifierMixin):
-     """Bayesian generative classification based on KDE
-     Parameters
-     ----------
-     bandwidth : float
-         the kernel bandwidth within each class
-     kernel : str
-         the kernel name, passed to KernelDensity
-     """
Next comes the class initialization method:
- def __init__(self, bandwidth=1.0, kernel='gaussian'):
-     self.bandwidth = bandwidth
-     self.kernel = kernel
Next comes the fit() method, where we handle training data:
- def fit(self, X, y):
-     self.classes_ = np.sort(np.unique(y))
-     training_sets = [X[y == yi] for yi in self.classes_]
-     self.models_ = [KernelDensity(bandwidth=self.bandwidth, kernel=self.kernel).fit(Xi) for Xi in training_sets]
-     self.logpriors_ = [np.log(Xi.shape[0] / X.shape[0]) for Xi in training_sets]
-     return self
Notice that fit() returns self, which allows us to chain commands, for example:
- label = model.fit(X, y).predict(X)
Finally, we have the logic for predicting labels on new data:
- def predict_proba(self, X):
-     logprobs = np.array([model.score_samples(X) for model in self.models_]).T
-     result = np.exp(logprobs + self.logpriors_)
-     return result / result.sum(1, keepdims=True)
- def predict(self, X):
-     return self.classes_[np.argmax(self.predict_proba(X), 1)]
Finally, the predict() method uses these probabilities and simply returns the class with the largest probability.
Using our custom estimator
Let’s try this custom estimator on a problem we have seen before: the classification of handwritten digits. Here we will load the digits, and compute the cross-validation score for a range of candidate bandwidths using the GridSearchCV meta-estimator (refer back to “Hyperparameters and Model Validation” for more information on this):
- from sklearn.datasets import load_digits
- from sklearn.model_selection import GridSearchCV
- digits = load_digits()
- bandwidths = 10 ** np.linspace(0, 2, 100)
- grid = GridSearchCV(KDEClassifier(), {'bandwidth': bandwidths})
- grid.fit(digits.data, digits.target)
- scores = grid.cv_results_['mean_test_score']
- plt.semilogx(bandwidths, scores)
- plt.xlabel('bandwidth')
- plt.ylabel('accuracy')
- plt.title('KDE Model Performance')
- print(grid.best_params_)
- print('accuracy =', grid.best_score_)
Figure 5-148. Validation curve for the KDE-based Bayesian classifier
We see that this not-so-naive Bayesian classifier reaches a cross-validation accuracy of just over 96%; this is compared to around 80% for the naive Bayesian classification:
- from sklearn.naive_bayes import GaussianNB
- from sklearn.model_selection import cross_val_score
- cross_val_score(GaussianNB(), digits.data, digits.target).mean()
One benefit of such a generative classifier is interpretability of results: for each unknown sample, we not only get a probabilistic classification, but a full model of the distribution of points we are comparing it to! If desired, this offers an intuitive window into the reasons for a particular classification that algorithms like SVMs and random forests tend to obscure.
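As a brief illustration (reusing the grid object fit above, which GridSearchCV refits with the best bandwidth by default), we can inspect the per-class posterior probabilities for a few digits:
- # posterior probabilities for the first five digits, one column per class
- probs = grid.best_estimator_.predict_proba(digits.data[:5])
- probs.round(3)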
If you would like to take this further, there are some improvements that could be made to our KDE classifier model: for example, the kernel bandwidth could be allowed to vary independently for each class rather than sharing a single value across all classes.
Finally, if you want some practice building your own estimator, you might tackle building a similar Bayesian classifier using Gaussian mixture models instead of KDE.
Supplement
* Windows Python Basemap installation tutorial
* Wiki - Bayes' theorem
* FAQ - AttributeError: 'GridSearchCV' object has no attribute 'grid_scores_'