Introducing Scikit-Learn
There are several Python libraries that provide solid implementations of a range of machine learning algorithms. One of the best known is Scikit-Learn, a package that provides efficient versions of a large number of common algorithms. Scikit-Learn is characterized by a clean, uniform, and streamlined API, as well as by very useful and complete online documentation. A benefit of this uniformity is that once you understand the basic use and syntax of Scikit-Learn for one type of model, switching to a new model or algorithm is very straightforward.
This section provides an overview of the Scikit-Learn API; a solid understanding of these API elements will form the foundation for understanding the deeper practical discussion of machine learning algorithms and approaches in the following chapters.
We will start by covering data representation in Scikit-Learn, followed by covering the Estimator API, and finally go through a more interesting example of using these tools for exploring a set of images of handwritten digits.
Data Representation in Scikit-Learn
Machine learning is about creating models from data: for that reason, we’ll start by discussing how data can be represented in order to be understood by the computer. The best way to think about data within Scikit-Learn is in terms of tables of data.
Data as table
A basic table is a two-dimensional grid of data, in which the rows represent individual elements of the dataset, and the columns represent quantities related to each of these elements. For example, consider the Iris dataset, famously analyzed by Ronald Fisher in 1936. We can download this dataset in the form of a Pandas DataFrame using the Seaborn library:
- import seaborn as sns
- iris = sns.load_dataset('iris')
- iris.head()
Here each row of the data refers to a single observed flower, and the number of rows is the total number of flowers in the dataset. In general, we will refer to the rows of the matrix as samples, and the number of rows as n_samples. Likewise, each column of the data refers to a particular quantitative piece of information that describes each sample. In general, we will refer to the columns of the matrix as features, and the number of columns as n_features.
Features matrix
This table layout makes clear that the information can be thought of as a two-dimensional numerical array or matrix, which we will call the features matrix. By convention, this features matrix is often stored in a variable named X. The features matrix is assumed to be two-dimensional, with shape [n_samples, n_features], and is most often contained in a NumPy array or a Pandas DataFrame, though some Scikit-Learn models also accept SciPy sparse matrices.
The samples (i.e., rows) always refer to the individual objects described by the dataset. For example, the sample might be a flower, a person, a document, an image, a sound file, a video, an astronomical object, or anything else you can describe with a set of quantitative measurements. The features (i.e., columns) always refer to the distinct observations that describe each sample in a quantitative manner. Features are generally real-valued, but may be Boolean or discrete-valued in some cases.
Target array
In addition to the feature matrix X, we also generally work with a label or target array, which by convention we will usually call y. The target array is usually one dimensional, with length n_samples, and is generally contained in a NumPy array or Pandas Series. The target array may have continuous numerical values, or discrete classes/labels. While some Scikit-Learn estimators do handle multiple target values in the form of a two-dimensional [n_samples, n_targets] target array, we will primarily be working with the common case of a one-dimensional target array.
Often one point of confusion is how the target array differs from the other feature columns. The distinguishing feature of the target array is that it is usually the quantity we want to predict from the data: in statistical terms, it is the dependent variable. For example, in the preceding data we may wish to construct a model that can predict the species of flower based on the other measurements; in this case, the species column would be considered the target.
With this target array in mind, we can use Seaborn (discussed earlier in “Visualization with Seaborn” on page 311) to conveniently visualize the data (see Figure 5-12):
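A plot along these lines can be produced with Seaborn's pairplot in a single line (a minimal sketch; the height parameter is named size in older Seaborn releases):
- sns.pairplot(iris, hue='species', height=1.5);   # pairwise scatter plots of the four features, colored by species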
Figure 5-12. A visualization of the Iris dataset
For use in Scikit-Learn, we will extract the features matrix and target array from the DataFrame, which we can do using some of the Pandas DataFrame operations discussed in Chapter 3:
- X_iris = iris.drop('species', axis=1)
- print('X_iris.shape={}'.format(X_iris.shape)) # X_iris.shape=(150, 4)
- y_iris = iris['species']
- print('y_iris.shape={}'.format(y_iris.shape)) # y_iris.shape=(150,)
Scikit-Learn’s Estimator API
The Scikit-Learn API is designed with the following guiding principles in mind, as outlined in the Scikit-Learn API paper:
* Consistency: all objects share a common interface drawn from a limited set of methods, with consistent documentation.
* Inspection: all specified parameter values are exposed as public attributes.
* Limited object hierarchy: only algorithms are represented by Python classes; datasets are represented in standard formats (NumPy arrays, Pandas DataFrames, SciPy sparse matrices), and parameter names use standard Python strings.
* Composition: many machine learning tasks can be expressed as sequences of more fundamental algorithms, and Scikit-Learn makes use of this wherever possible.
* Sensible defaults: when models require user-specified parameters, the library defines an appropriate default value.
In practice, these principles make Scikit-Learn very easy to use, once the basic principles are understood. Every machine learning algorithm in Scikit-Learn is implemented via the Estimator API, which provides a consistent interface for a wide range of machine learning applications.
Basics of the API
Most commonly, the steps in using the Scikit-Learn estimator API are as follows (we will step through a handful of detailed examples in the sections that follow):
1. Choose a class of model by importing the appropriate estimator class from Scikit-Learn.
2. Choose model hyperparameters by instantiating this class with the desired values.
3. Arrange the data into a features matrix and target vector, as described earlier.
4. Fit the model to your data by calling the fit() method of the model instance.
5. Apply the model to new data: for supervised learning we typically predict labels for unknown data using the predict() method; for unsupervised learning we typically transform or infer properties of the data using the transform() or predict() method.
We will now step through several simple examples of applying supervised and unsupervised learning methods.
Supervised learning example: Simple linear regression
As an example of this process, let’s consider a simple linear regression—that is, the common case of fitting a line to x, y data. We will use the following simple data for our regression example (Figure 5-14):
- import matplotlib.pyplot as plt
- import numpy as np
- rng = np.random.RandomState(42)
- x = 10 * rng.rand(50)
- y = 2 * x - 1 + rng.randn(50)
- plt.scatter(x, y);
Figure 5-14. Data for linear regression
With this data in place, we can use the recipe outlined earlier. Let's walk through the process step by step (a code sketch covering all five steps follows the list):
1. Choose a class of model.
2. Choose model hyperparameters.
3. Arrange data into a features matrix and target vector.
4. Fit the model to your data.
5. Predict labels for unknown data.
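A minimal sketch of these five steps for the data above, assuming Scikit-Learn's LinearRegression estimator (the fit_intercept setting and the prediction range are illustrative choices):
- from sklearn.linear_model import LinearRegression      # 1. choose the model class
- model = LinearRegression(fit_intercept=True)           # 2. instantiate the model with hyperparameters
- X = x[:, np.newaxis]                                   # 3. arrange data into a [n_samples, n_features] matrix
- model.fit(X, y)                                        # 4. fit the model to the data
- xfit = np.linspace(-1, 11)
- yfit = model.predict(xfit[:, np.newaxis])              # 5. predict labels for new data
- plt.scatter(x, y)
- plt.plot(xfit, yfit);                                  # the fitted line should lie close to y = 2x - 1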
Supervised learning example: Iris classification
Let’s take a look at another example of this process, using the Iris dataset we discussed earlier. Our question will be this: given a model trained on a portion of the Iris data, how well can we predict the remaining labels?
For this task, we will use an extremely simple generative model known as Gaussian naive Bayes, which proceeds by assuming each class is drawn from an axis-aligned Gaussian distribution (see “In Depth: Naive Bayes Classification” on page 382 for more details). Because it is so fast and has no hyperparameters to choose, Gaussian naive Bayes is often a good model to use as a baseline classification, before you explore whether improvements can be found through more sophisticated models.
We would like to evaluate the model on data it has not seen before, and so we will split the data into a training set and a testing set. This could be done by hand, but it is more convenient to use the train_test_split utility function:
- from sklearn.model_selection import train_test_split
- Xtrain, Xtest, ytrain, ytest = train_test_split(X_iris, y_iris, random_state=1)
- print('Xtrain.shape={}; Xtest.shape={}'.format(Xtrain.shape, Xtest.shape)) # Xtrain.shape=(112, 4); Xtest.shape=(38, 4)
- from sklearn.naive_bayes import GaussianNB # 1. choose model class
- model = GaussianNB() # 2. instantiate model
- model.fit(Xtrain, ytrain) # 3. fit model to data
- y_model = model.predict(Xtest) # 4. predict on new data
- from sklearn.metrics import accuracy_score
- accuracy_score(ytest, y_model) # 0.9736842105263158
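To make the "axis-aligned Gaussian" assumption concrete, here is a minimal hand-rolled sketch of the same idea (illustration only, not a replacement for GaussianNB): it estimates a per-class mean and variance for each feature and scores a sample by its log class prior plus the sum of per-feature Gaussian log-densities.
- import numpy as np
- classes = np.unique(ytrain)
- means = {c: Xtrain[ytrain == c].mean(axis=0).values for c in classes}       # per-class feature means
- variances = {c: Xtrain[ytrain == c].var(axis=0).values for c in classes}    # per-class feature variances
- priors = {c: np.mean(ytrain == c) for c in classes}                         # class frequencies in the training set
- def predict_one(x):
-     # score = log prior + sum of independent per-feature Gaussian log-densities
-     scores = {c: np.log(priors[c])
-                  - 0.5 * np.sum(np.log(2 * np.pi * variances[c])
-                                 + (x - means[c]) ** 2 / variances[c])
-               for c in classes}
-     return max(scores, key=scores.get)
- y_hand = [predict_one(x) for x in Xtest.values]
- accuracy_score(ytest, y_hand)   # should land close to the GaussianNB result above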
Unsupervised learning example: Iris dimensionality
As an example of an unsupervised learning problem, let’s take a look at reducing the dimensionality of the Iris data so as to more easily visualize it. Recall that the Iris data is four dimensional: there are four features recorded for each sample.
The task of dimensionality reduction is to ask whether there is a suitable lower-dimensional representation that retains the essential features of the data. Often dimensionality reduction is used as an aid to visualizing data; after all, it is much easier to plot data in two dimensions than in four dimensions or higher!
Here we will use principal component analysis (PCA; see “In Depth: Principal Component Analysis” on page 433), which is a fast linear dimensionality reduction technique. We will ask the model to return two components—that is, a two-dimensional representation of the data.
Following the sequence of steps outlined earlier, we have:
- from sklearn.decomposition import PCA # 1. Choose the model class
- model = PCA(n_components=2) # 2. Instantiate the model with hyperparameters
- model.fit(X_iris) # 3. Fit to data. Notice y is not specified!
- X_2D = model.transform(X_iris) # 4. Transform the data to two dimensions
- iris['PCA1'] = X_2D[:, 0]
- iris['PCA2'] = X_2D[:, 1]
- sns.lmplot(x="PCA1", y="PCA2", hue='species', data=iris, fit_reg=False);
Figure 5-16. The Iris data projected to two dimensions
Unsupervised learning: Iris clustering
Let’s next look at applying clustering to the Iris data. A clustering algorithm attempts to find distinct groups of data without reference to any labels. Here we will use a powerful clustering method called a Gaussian mixture model (GMM), discussed in more detail in “In Depth: Gaussian Mixture Models” on page 476. A GMM attempts to model the data as a collection of Gaussian blobs.
We can fit the Gaussian mixture model as follows:
- from sklearn.mixture import GaussianMixture # 1. Choose the model class
- model = GaussianMixture(n_components=3, covariance_type='full')   # 2. Instantiate the model w/ hyperparameters
- model.fit(X_iris) # 3. Fit to data. Notice y is not specified!
- y_gmm = model.predict(X_iris) # 4. Determine cluster labels
- iris['cluster'] = y_gmm
- sns.lmplot(x="PCA1", y="PCA2", data=iris, hue='species', col='cluster', fit_reg=False);
Figure 5-17. Gaussian mixture model (GMM) clusters within the Iris data
By splitting the data by cluster number, we see exactly how well the GMM algorithm has recovered the underlying label: the setosa species is separated perfectly within cluster 0, while there remains a small amount of mixing between versicolor and virginica. This means that even without an expert to tell us the species labels of the individual flowers, the measurements of these flowers are distinct enough that we could automatically identify the presence of these different groups of species with a simple clustering algorithm! This sort of algorithm might further give experts in the field clues as to the relationship between the samples they are observing.
Application: Exploring Handwritten Digits
To demonstrate these principles on a more interesting problem, let’s consider one piece of the optical character recognition problem: the identification of handwritten digits. In the wild, this problem involves both locating and identifying characters in an image. Here we’ll take a shortcut and use Scikit-Learn’s set of preformatted digits, which is built into the library.
Loading and visualizing the digits data
We’ll use Scikit-Learn’s data access interface and take a look at this data:
- from sklearn.datasets import load_digits
- digits = load_digits()
- print(digits.images.shape) # (1797, 8, 8)
- import matplotlib.pyplot as plt
- fig, axes = plt.subplots(10, 10, figsize=(8, 8),
-                          subplot_kw={'xticks':[], 'yticks':[]},
-                          gridspec_kw=dict(hspace=0.1, wspace=0.1))
- for i, ax in enumerate(axes.flat):
-     ax.imshow(digits.images[i], cmap='binary', interpolation='nearest')
-     ax.text(0.05, 0.05, str(digits.target[i]), transform=ax.transAxes, color='green')
Figure 5-18. The handwritten digits data; each sample is represented by one 8×8 grid of pixels
In order to work with this data within Scikit-Learn, we need a two-dimensional, [n_samples, n_features] representation. We can accomplish this by treating each pixel in the image as a feature—that is, by flattening out the pixel arrays so that we have a length-64 array of pixel values representing each digit. Additionally, we need the target array, which gives the previously determined label for each digit. These two quantities are built into the digits dataset under the data and target attributes, respectively:
- X = digits.data
- y = digits.target
- print('X.shape={}; y.shape={}'.format(X.shape, y.shape)) # X.shape=(1797, 64); y.shape=(1797,)
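As a quick sanity check (a small sketch; it assumes, as described above, that data is simply the row-wise flattened version of images), reshaping the images by hand should reproduce the data attribute:
- import numpy as np
- X_flat = digits.images.reshape(len(digits.images), -1)   # flatten each 8x8 image into a length-64 row
- print(X_flat.shape)                                      # (1797, 64)
- print(np.allclose(X_flat, digits.data))                  # expected: True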
Unsupervised learning: Dimensionality reduction
We’d like to visualize our points within the 64-dimensional parameter space, but it’s difficult to effectively visualize points in such a high-dimensional space. Instead we’ll reduce the dimensions to 2, using an unsupervised method. Here, we’ll make use of a manifold learning algorithm called Isomap (see “In-Depth: Manifold Learning” on page 445), and transform the data to two dimensions:
- from sklearn.manifold import Isomap
- iso = Isomap(n_components=2)
- iso.fit(digits.data)
- data_projected = iso.transform(digits.data)
- print("data_projected.shape={}".format(data_projected.shape)) # data_projected.shape=(1797, 2)
- plt.rcParams["figure.figsize"] = (15, 7)
- plt.scatter(data_projected[:, 0], data_projected[:, 1], c=digits.target,
- edgecolor='none', alpha=0.7, cmap=plt.cm.get_cmap('seismic', 10))
- plt.colorbar(label='digit label', ticks=range(10))
- plt.clim(-0.5, 9.5);
Figure 5-19. An Isomap embedding of the digits data
This plot gives us some good intuition into how well various numbers are separated in the larger 64-dimensional space. For example, zeros (in dark blue) and sevens (in red) have very little overlap in parameter space. Intuitively, this makes sense: a zero is empty in the middle of the image, while a seven will generally have ink in the middle. On the other hand, there seems to be a more or less continuous spectrum between twos and sevens, which makes sense given that handwritten twos and sevens can look quite similar.
Overall, however, the different groups appear to be fairly well separated in the parameter space: this tells us that even a very straightforward supervised classification algorithm should perform suitably on this data. Let’s give it a try.
Classification on digits
Let’s apply a classification algorithm to the digits. As with the Iris data previously, we will split the data into a training and test set, and fit a Gaussian naive Bayes model:
- from sklearn.naive_bayes import GaussianNB
- Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=0)
- model = GaussianNB()
- model.fit(Xtrain, ytrain)
- y_model = model.predict(Xtest)
- from sklearn.metrics import accuracy_score
- print("accuracy={:.02f}%".format(accuracy_score(ytest, y_model) * 100)) # accuracy=83.33%
- from sklearn.metrics import confusion_matrix
- mat = confusion_matrix(ytest, y_model)
- sns.heatmap(mat, square=True, annot=True, cbar=True, cmap='gist_heat')
- plt.xlabel('predicted value')
- plt.ylabel('true value');
Figure 5-20. A confusion matrix showing the frequency of misclassifications by our classifier
This shows us where the mislabeled points tend to be: for example, a large number of twos here are misclassified as either ones or eights. Another way to gain intuition into the characteristics of the model is to plot the inputs again, with their predicted labels. We’ll use green for correct labels, and red for incorrect labels (Figure 5-21):
- fig, axes = plt.subplots(10, 10, figsize=(8, 8), subplot_kw={'xticks':[], 'yticks':[]},
-                          gridspec_kw=dict(hspace=0.1, wspace=0.1))
- test_images = Xtest.reshape(-1, 8, 8)   # reshape the flattened test rows back into 8x8 images
- for i, ax in enumerate(axes.flat):
-     ax.imshow(test_images[i], cmap='binary', interpolation='nearest')
-     ax.text(0.05, 0.05, str(y_model[i]),
-             transform=ax.transAxes, color='green' if (ytest[i] == y_model[i]) else 'red')
Figure 5-21. Data showing correct (green) and incorrect (red) labels
Examining this subset of the data, we can gain insight regarding where the algorithm might not be performing optimally. To go beyond our roughly 83% classification accuracy, we might move to a more sophisticated algorithm, such as support vector machines (see "In-Depth: Support Vector Machines" on page 405) or random forests (see "In-Depth: Decision Trees and Random Forests" on page 421), or another classification approach.
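Thanks to the uniform estimator API discussed at the start of this section, trying such a model is nearly a one-line change. Here is a minimal sketch using a random forest (the hyperparameters shown are illustrative defaults, not tuned values):
- from sklearn.ensemble import RandomForestClassifier
- forest = RandomForestClassifier(n_estimators=100, random_state=0)   # illustrative settings
- forest.fit(Xtrain, ytrain)
- y_forest = forest.predict(Xtest)
- print("accuracy={:.02f}%".format(accuracy_score(ytest, y_forest) * 100))   # typically well above the naive Bayes score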
Summary
In this section we have covered the essential features of the Scikit-Learn data representation, and the estimator API. Regardless of the type of estimator, the same import/instantiate/fit/predict pattern holds. Armed with this information about the estimator API, you can explore the Scikit-Learn documentation and begin trying out various models on your data.
In the next section, we will explore perhaps the most important topic in machine learning: how to select and validate your model.
Supplement
* Scikit-learn doc - Preprocessing data
* Scikit-learn doc - Choosing Colormaps in Matplotlib