Hyperparameters and Model Validation
In the previous section, we saw the basic recipe for applying a supervised machine learning model: choose a class of model, choose model hyperparameters, fit the model to the training data, and then use the model to predict labels for new data.
The first two pieces of this—the choice of model and choice of hyperparameters—are perhaps the most important part of using these tools and techniques effectively. In order to make an informed choice, we need a way to validate that our model and our hyperparameters are a good fit to the data. While this may sound simple, there are some pitfalls that you must avoid to do this effectively.
Thinking About Model Validation
In principle, model validation is very simple: after choosing a model and its hyperparameters, we can estimate how effective it is by applying it to some of the training data and comparing the prediction to the known value. The following sections first show a naive approach to model validation and why it fails, before exploring the use of holdout sets and cross-validation for more robust model evaluation.
Model validation the wrong way
Let’s demonstrate the naive approach to validation using the Iris data, which we saw in the previous section. We will start by loading the data:
- from sklearn.datasets import load_iris
- iris = load_iris()
- X = iris.data
- y = iris.target
- # choose a model class and hyperparameters: a k-nearest-neighbors classifier with n_neighbors=1
- from sklearn.neighbors import KNeighborsClassifier
- model = KNeighborsClassifier(n_neighbors=1)
- # train the model, and use it to predict labels for data we already know
- model.fit(X, y)
- y_model = model.predict(X)
- # compute the fraction of correctly labeled points
- from sklearn.metrics import accuracy_score
- accuracy_score(y, y_model) # 1.0
We see an accuracy of 1.0, indicating that 100% of the points were correctly labeled by our model. But is this really a measure of the expected accuracy? As you may have gathered, the answer is no. In fact, this approach contains a fundamental flaw: it trains and evaluates the model on the same data. Furthermore, the nearest neighbor model is an instance-based estimator that simply stores the training data, and predicts labels by comparing new data to these stored points; except in contrived cases, it will get 100% accuracy every time!
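As a quick check, not in the original text, we can confirm this mechanism directly: every training point lies at distance zero from a stored point (itself, or an identical duplicate row), so its stored label is simply read back.
- # query the fitted model for each training point's single nearest stored neighbor
- dist, ind = model.kneighbors(X, n_neighbors=1)
- dist.max() # 0.0: every training point exactly matches a stored point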
Model validation the right way: Holdout sets
So what can be done? We can get a better sense of a model’s performance using what’s known as a holdout set; that is, we hold back some subset of the data from the training of the model, and then use this holdout set to check the model performance. We can do this splitting using the train_test_split utility in Scikit-Learn:
- from sklearn.model_selection import train_test_split
- # split the data with 50% in each set
- X1, X2, y1, y2 = train_test_split(X, y, random_state=0, train_size=0.5)
- # fit the model on one set of data
- model.fit(X1, y1)
- # evaluate the model on the second set of data
- y2_model = model.predict(X2)
- accuracy_score(y2, y2_model) # 0.9066666666666666
We see here a more reasonable result: the nearest-neighbor classifier is about 90% accurate on this holdout set, which consists of data the model has not seen during training.
Model validation via cross-validation
One disadvantage of using a holdout set for model validation is that we have lost a portion of our data to the model training. In the previous case, half the dataset does not contribute to the training of the model! This is not optimal, and can cause problems—especially if the initial set of training data is small.
One way to address this is to use cross-validation—that is, to do a sequence of fits where each subset of the data is used both as a training set and as a validation set. Visually, it might look something like Figure 5-22:
Figure 5-22. Visualization of two-fold cross-validation
Here we do two validation trials, alternately using each half of the data as a holdout set. Using the split data from before, we can implement it like this:
- y2_model = model.fit(X1, y1).predict(X2)
- y1_model = model.fit(X2, y2).predict(X1)
- accuracy_score(y1, y1_model), accuracy_score(y2, y2_model) # (0.96, 0.9066666666666666)
The result is two accuracy scores, which we can combine (by, say, taking the mean) to get a better measure of the global model performance. This particular form of cross-validation, in which we split the data into two sets and use each in turn as a validation set, is known as two-fold cross-validation.
We could expand on this idea to use even more trials, and more folds in the data—for example, Figure 5-23 is a visual depiction of five-fold cross-validation.
Figure 5-23. Visualization of five-fold cross-validation
Here we split the data into five groups, and use each of them in turn to evaluate the model fit on the other 4/5 of the data. This would be rather tedious to do by hand, and so we can use Scikit-Learn’s cross_val_score convenience routine to do it succinctly:
- from sklearn.model_selection import cross_val_score
- cross_val_score(model, X, y, cv=5) # array([0.96666667, 0.96666667, 0.93333333, 0.93333333, 1. ])
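The individual fold scores are typically summarized, for example by taking their mean. As a small follow-up, not in the original text:
- # average the five fold scores into a single cross-validated accuracy estimate
- cross_val_score(model, X, y, cv=5).mean() # 0.96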
Scikit-Learn implements a number of cross-validation schemes that are useful in particular situations; these are implemented via iterators in the model_selection module. For example, we might wish to go to the extreme case in which our number of folds is equal to the number of data points; that is, we train on all points but one in each trial. This type of cross-validation is known as leave-one-out cross-validation, and can be used as follows:
- from sklearn.model_selection import LeaveOneOut
- scores = cross_val_score(model, X, y, cv=LeaveOneOut())
- print(scores.shape) # (150,)
- scores.mean() # 0.96
Because there are 150 samples, leave-one-out cross-validation produces 150 scores, each indicating either a successful (1.0) or an unsuccessful (0.0) prediction; taking the mean of these gives an overall estimate of the model's accuracy.
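As one further illustration of these iterator-based schemes, not from the original text, an explicit KFold object with shuffling can be passed as the cv argument in exactly the same way:
- from sklearn.model_selection import KFold
- # five shuffled folds with a fixed random seed, passed as an explicit cross-validation iterator
- cross_val_score(model, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0)).mean()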
Selecting the Best Model
Now that we’ve seen the basics of validation and cross-validation, we will go into a little more depth regarding model selection and selection of hyperparameters. These issues are some of the most important aspects of the practice of machine learning, and I find that this information is often glossed over in introductory machine learning tutorials.
Of core importance is the following question: if our estimator is underperforming, how should we move forward? There are several possible answers: we could use a more complicated (more flexible) model, use a less complicated (less flexible) model, gather more training samples, or gather more data to add features to each sample.
The answer to this question is often counterintuitive. In particular, sometimes using a more complicated model will give worse results, and adding more training samples may not improve your results! The ability to determine what steps will improve your model is what separates the successful machine learning practitioners from the unsuccessful.
The bias–variance trade-off
Fundamentally, the question of “the best model” is about finding a sweet spot in the trade-off between bias and variance. Consider Figure 5-24, which presents two regression fits to the same dataset.
Figure 5-24. A high-bias and high-variance regression model
It is clear that neither of these models is a particularly good fit to the data, but they fail in different ways.
The model on the left attempts to find a straight-line fit through the data. Because the data are intrinsically more complicated than a straight line, the straight-line model will never be able to describe this dataset well. Such a model is said to underfit the data; that is, it does not have enough model flexibility to suitably account for all the features in the data. Another way of saying this is that the model has high bias.
The model on the right attempts to fit a high-order polynomial through the data. Here the model fit has enough flexibility to nearly perfectly account for the fine features in the data, but even though it very accurately describes the training data, its precise form seems to be more reflective of the particular noise properties of the data rather than the intrinsic properties of whatever process generated that data. Such a model is said to overfit the data; that is, it has so much model flexibility that the model ends up accounting for random errors as well as the underlying data distribution. Another way of saying this is that the model has high variance.
To look at this in another light, consider what happens if we use these two models to predict the y-value for some new data. In the diagrams in Figure 5-25, the red (lighter) points indicate data that is omitted from the training set.
Figure 5-25. Training and validation scores in high-bias and high-variance models
The score here is the R^2 score, or coefficient of determination, which measures how well a model performs relative to a simple mean of the target values. R^2 = 1 indicates a perfect match, R^2 = 0 indicates that the model does no better than simply taking the mean of the data, and negative values indicate even worse models (a short numerical illustration of these three regimes follows below). From the scores associated with these two models, we can make an observation that holds more generally: for high-bias models, the performance on the validation set is similar to the performance on the training set, while for high-variance models, the performance on the validation set is far worse than the performance on the training set.
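Here is a minimal sketch, not part of the original text, showing the three R^2 regimes directly with Scikit-Learn's r2_score function:
- from sklearn.metrics import r2_score
- import numpy as np
- y_true = np.array([1.0, 2.0, 3.0, 4.0])
- r2_score(y_true, y_true) # 1.0: perfect predictions
- r2_score(y_true, np.full_like(y_true, y_true.mean())) # 0.0: no better than predicting the mean
- r2_score(y_true, y_true[::-1]) # -3.0: worse than predicting the mean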
If we imagine that we have some ability to tune the model complexity, we would expect the training score and validation score to behave as illustrated in Figure 5-26.
Figure 5-26. A schematic of the relationship between model complexity, training score, and validation score
The diagram shown in Figure 5-26 is often called a validation curve, and we see the following essential features: the training score is everywhere higher than the validation score, since a model will generally fit data it has seen better than data it has not; for very low model complexity (a high-bias model), the training data is underfit, and the model is a poor predictor both for the training data and for any previously unseen data; for very high model complexity (a high-variance model), the training data is overfit, meaning the model predicts the training data very well but fails for previously unseen data; and for some intermediate value, the validation curve has a maximum, indicating a suitable trade-off between bias and variance.
The means of tuning the model complexity varies from model to model; when we discuss individual models in depth in later sections, we will see how each model allows for such tuning.
Validation curves in Scikit-Learn
Let’s look at an example of using cross-validation to compute the validation curve for a class of models. Here we will use a polynomial regression model: this is a generalized linear model in which the degree of the polynomial is a tunable parameter. For example, a degree-1 polynomial fits a straight line to the data; for model parameters a and b: y = ax + b
A degree-3 polynomial fits a cubic curve to the data; for model parameters a, b, c, and d: y = ax^3 + bx^2 + cx + d
We can generalize this to any number of polynomial features. In Scikit-Learn, we can implement this with a simple linear regression combined with the polynomial preprocessor. We will use a pipeline to string these operations together (we will discuss polynomial features and pipelines more fully in "Feature Engineering"):
- from sklearn.preprocessing import PolynomialFeatures
- from sklearn.linear_model import LinearRegression
- from sklearn.pipeline import make_pipeline
- def PolynomialRegression(degree=2, **kwargs):
-     return make_pipeline(PolynomialFeatures(degree),
-                          LinearRegression(**kwargs))
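To make the role of the polynomial preprocessor concrete, here is a small aside, not in the original text, showing what PolynomialFeatures does to a single input value:
- import numpy as np
- # degree=3 expands each value x into the columns [1, x, x**2, x**3]
- PolynomialFeatures(3).fit_transform(np.array([[2.0]])) # array([[1., 2., 4., 8.]])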
Now let's create some data to which we will fit our model:
- import numpy as np
- def make_data(N, err=1.0, rseed=1):
-     # randomly sample the data
-     rng = np.random.RandomState(rseed)
-     X = rng.rand(N, 1) ** 2
-     y = 10 - 1. / (X.ravel() + 0.1)
-     if err > 0:
-         y += err * rng.randn(N)
-     return X, y
- X, y = make_data(40)
We can now visualize our data, along with polynomial fits of several degrees (Figure 5-27):
- %matplotlib inline
- import matplotlib.pyplot as plt
- import seaborn; seaborn.set() # plot formatting
- plt.rcParams['figure.figsize'] = [10, 5]
- X_test = np.linspace(-0.1, 1.1, 500)[:, None]
- plt.scatter(X.ravel(), y, color='black')
- axis = plt.axis()
- for degree in [1, 3, 5]:
-     y_test = PolynomialRegression(degree).fit(X, y).predict(X_test)
-     plt.plot(X_test.ravel(), y_test, label='degree={0}'.format(degree))
- plt.xlim(-0.1, 1.0)
- plt.ylim(-2, 12)
- plt.legend(loc='best');
Figure 5-27. Three different polynomial models fit to a dataset
The knob controlling model complexity in this case is the degree of the polynomial, which can be any non-negative integer. A useful question to answer is this: what degree of polynomial provides a suitable trade-off between bias (underfitting) and variance (overfitting)?
We can make progress in this by visualizing the validation curve for this particular data and model; we can do this straightforwardly using the validation_curve convenience routine provided by Scikit-Learn. Given a model, data, parameter name, and a range to explore, this function will automatically compute both the training score and validation score across the range (Figure 5-28):
- from sklearn.model_selection import validation_curve
- degree = np.arange(0, 21)
- train_score, val_score = validation_curve(PolynomialRegression(), X, y,
-                                            param_name='polynomialfeatures__degree',
-                                            param_range=degree, cv=7)
- plt.plot(degree, np.median(train_score, 1), color='blue', label='training score')
- plt.plot(degree, np.median(val_score, 1), color='red', label='validation score')
- plt.legend(loc='best')
- plt.ylim(0, 1)
- plt.xlabel('degree')
- plt.ylabel('score');
Figure 5-28. The validation curves for the data in Figure 5-27
This shows precisely the qualitative behavior we expect: the training score is everywhere higher than the validation score; the training score is monotonically improving with increased model complexity; and the validation score reaches a maximum before dropping off as the model becomes overfit. From the validation curve, we can read off that the optimal trade-off between bias and variance is found for a third-order polynomial; we can compute and display this fit over the original data as follows (Figure 5-29):
- plt.scatter(X.ravel(), y)
- lim = plt.axis()
- y_test = PolynomialRegression(3).fit(X, y).predict(X_test)
- plt.plot(X_test.ravel(), y_test);
- plt.axis(lim);
Figure 5-29. The cross-validated optimal model for the data in Figure 5-27
Notice that finding this optimal model did not actually require us to compute the training score, but examining the relationship between the training score and validation score can give us useful insight into the performance of the model.
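As a small aside, not in the original text, the same read-off can be done programmatically from the validation-curve results computed above rather than by eye:
- # pick the degree whose median cross-validated score is highest
- # (this should agree with the third-order polynomial identified from the plot)
- best_degree = degree[np.argmax(np.median(val_score, 1))]
- print(best_degree)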
Learning Curves
One important aspect of model complexity is that the optimal model will generally depend on the size of your training data. For example, let’s generate a new dataset with a factor of five more points (Figure 5-30):
- X2, y2 = make_data(200)
- plt.scatter(X2.ravel(), y2);
Figure 5-30. Data to demonstrate learning curves
We will duplicate the preceding code to plot the validation curve for this larger dataset; for reference let’s over-plot the previous results as well (Figure 5-31):
- degree = np.arange(21)
- train_score2, val_score2 = validation_curve(PolynomialRegression(), X2, y2,
-                                              param_name='polynomialfeatures__degree',
-                                              param_range=degree, cv=7)
- plt.plot(degree, np.median(train_score2, 1), color='blue', label='training score')
- plt.plot(degree, np.median(val_score2, 1), color='red', label='validation score')
- plt.plot(degree, np.median(train_score, 1), color='blue', alpha=0.3, linestyle='dashed')
- plt.plot(degree, np.median(val_score, 1), color='red', alpha=0.3, linestyle='dashed')
- plt.legend(loc='lower center')
- plt.ylim(0, 1)
- plt.xlabel('degree')
- plt.ylabel('score');
Figure 5-31. Validation curves for the polynomial model fit to the data in Figure 5-30
The solid lines show the new results, while the fainter dashed lines show the results of the previous smaller dataset. It is clear from the validation curve that the larger dataset can support a much more complicated model: the peak here is probably around a degree of 6, but even a degree-20 model is not seriously overfitting the data—the validation and training scores remain very close.
Thus we see that the behavior of the validation curve has not one, but two, important inputs: the model complexity and the number of training points. It is often useful to explore the behavior of the model as a function of the number of training points, which we can do by using increasingly larger subsets of the data to fit our model. A plot of the training/validation score with respect to the size of the training set is known as a learning curve.
The general behavior we would expect from a learning curve is this: a model of a given complexity will overfit a small dataset, meaning the training score will be relatively high while the validation score will be relatively low; a model of a given complexity will underfit a large dataset, meaning the training score will decrease while the validation score will increase; and a model will never, except by chance, give a better score to the validation set than to the training set, meaning the two curves should get closer together as the training set grows but never cross.
With these features in mind, we would expect a learning curve to look qualitatively like that shown in Figure 5-32.
Figure 5-32. Schematic showing the typical interpretation of learning curves
The notable feature of the learning curve is the convergence to a particular score as the number of training samples grows. In particular, once you have enough points that a particular model has converged, adding more training data will not help you! The only way to increase model performance in this case is to use another (often more complex) model.
Learning curves in Scikit-Learn
Scikit-Learn offers a convenient utility for computing such learning curves from your models; here we will compute a learning curve for our original dataset with a second order polynomial model and a ninth-order polynomial (Figure 5-33):
- from sklearn.model_selection import learning_curve
- fig, ax = plt.subplots(1, 2, figsize=(15, 6))
- fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)
- for i, degree in enumerate([2, 9]):
-     N, train_lc, val_lc = learning_curve(PolynomialRegression(degree), X, y, cv=7,
-                                          train_sizes=np.linspace(0.3, 1, 25))
-     ax[i].plot(N, np.mean(train_lc, 1), color='blue', label='training score')
-     ax[i].plot(N, np.mean(val_lc, 1), color='red', label='validation score')
-     ax[i].hlines(np.mean([train_lc[-1], val_lc[-1]]), N[0], N[-1],
-                  color='gray', linestyle='dashed')
-     ax[i].set_ylim(0, 1)
-     ax[i].set_xlim(N[0], N[-1])
-     ax[i].set_xlabel('training size')
-     ax[i].set_ylabel('score')
-     ax[i].set_title('degree = {0}'.format(degree), size=14)
-     ax[i].legend(loc='best')
Figure 5-33. Learning curves for a low-complexity model (left) and a high-complexity model (right)
This is a valuable diagnostic, because it gives us a visual depiction of how our model responds to increasing training data. In particular, when your learning curve has already converged (i.e., when the training and validation curves are already close to each other), adding more training data will not significantly improve the fit! This situation is seen in the left panel, with the learning curve for the degree-2 model.
The only way to increase the converged score is to use a different (usually more complicated) model. We see this in the right panel: by moving to a much more complicated model, we increase the score of convergence (indicated by the dashed line), but at the expense of higher model variance (indicated by the difference between the training and validation scores). If we were to add even more data points, the learning curve for the more complicated model would eventually converge.
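As a sketch of that last point, not in the original text, we can recompute the degree-9 learning curve on the larger dataset from Figure 5-30, where the training and validation curves have more room to approach each other:
- # degree-9 learning curve on the 200-point dataset (X2, y2) generated earlier
- N2, train_lc2, val_lc2 = learning_curve(PolynomialRegression(9), X2, y2, cv=7,
-                                         train_sizes=np.linspace(0.3, 1, 25))
- plt.plot(N2, np.mean(train_lc2, 1), color='blue', label='training score')
- plt.plot(N2, np.mean(val_lc2, 1), color='red', label='validation score')
- plt.xlabel('training size')
- plt.ylabel('score')
- plt.legend(loc='best');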
Plotting a learning curve for your particular choice of model and dataset can help you to make this type of decision about how to move forward in improving your analysis.
Validation in Practice: Grid Search
The preceding discussion is meant to give you some intuition into the trade-off between bias and variance, and its dependence on model complexity and training set size. In practice, models generally have more than one knob to turn, and thus plots of validation and learning curves change from lines to multidimensional surfaces. In these cases, such visualizations are difficult and we would rather simply find the particular model that maximizes the validation score.
Scikit-Learn provides automated tools to do this in the model_selection module. Here is an example of using grid search to find the optimal polynomial model. We will explore a three-dimensional grid of model features—namely, the polynomial degree, the flag telling us whether to fit the intercept, and the flag telling us whether to normalize the problem. We can set this up using Scikit-Learn’s GridSearchCV meta-estimator:
- from sklearn.model_selection import GridSearchCV
- param_grid = {'polynomialfeatures__degree': np.arange(21),
-               'linearregression__fit_intercept': [True, False],
-               'linearregression__normalize': [True, False]}
- # note: the 'normalize' option was removed from LinearRegression in scikit-learn 1.2;
- # with recent versions, drop that entry (or add an explicit scaling step to the pipeline instead)
- grid = GridSearchCV(PolynomialRegression(), param_grid, cv=7)
- grid.fit(X, y);
Now that the grid search is fit, we can ask for the best parameters as follows:
- grid.best_params_
Finally, if we wish, we can use the best model and show the fit to our data using code from before (Figure 5-34):
- model = grid.best_estimator_
- plt.scatter(X.ravel(), y)
- lim = plt.axis()
- y_test = model.fit(X, y).predict(X_test)
- plt.plot(X_test.ravel(), y_test);
- plt.axis(lim);
Figure 5-34. The best-fit model determined via an automatic grid-search
The grid search provides many more options, including the ability to specify a custom scoring function, to parallelize the computations, to do randomized searches, and more. For information, see the examples in “In-Depth: Kernel Density Estimation” on page 491 and “Application: A Face Detection Pipeline” on page 506, or refer to Scikit-Learn’s grid search documentation.
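As one example of those options, here is a minimal sketch, not from the original text, of a randomized search over the same pipeline using RandomizedSearchCV, which samples a fixed number of parameter combinations instead of exhausting the grid (restricted here to the degree and intercept parameters):
- from sklearn.model_selection import RandomizedSearchCV
- # sample 10 random combinations from the parameter space rather than trying all of them
- param_dist = {'polynomialfeatures__degree': np.arange(21),
-               'linearregression__fit_intercept': [True, False]}
- rand_search = RandomizedSearchCV(PolynomialRegression(), param_dist,
-                                  n_iter=10, cv=7, random_state=0)
- rand_search.fit(X, y)
- rand_search.best_params_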
Supplement
* PythonDataScienceHandbook/notebooks/05.03-Hyperpar...ers-and-Model-Validation.ipynb