Monday, January 30, 2017

[ Intro2ML ] Ch2. Supervised Learning - Linear models

Linear models 
Linear models are a class of models that are widely used in practice, and have been studied extensively in the last few decades, with roots going back over a hundred years. Linear models make a prediction using a linear function of the input features, which we will explain below. 

Linear models for regression 
For regression, the general prediction formula for a linear model looks as follows: 
y = w[0] x[0] + w[1] x[1] + ... + w[p] x[p] + b ... Formula 1

Here, x[0] to x[p] denote the features (the number of features is p) of a single data point, w and b are parameters of the model that are learned, and y is the prediction the model makes. For a dataset with a single feature, this is the equation for a line, which you might remember from high school mathematics. 
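
To make Formula 1 concrete, here is a minimal sketch (not from the book) that computes a single prediction with numpy; the weights, intercept, and data point are made-up numbers:

import numpy as np

# hypothetical weights, intercept and data point (made-up numbers, three features)
w = np.array([0.4, -1.2, 3.0])
b = 0.5
x = np.array([1.0, 2.0, 0.5])

# Formula 1: weighted sum of the features plus the offset b
y = np.dot(w, x) + b
print("prediction: %f" % y)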

Here, w[0] is the slope, and b is the y-axis offset. For more features, w contains the slopes along each feature axis. Alternatively, you can think of the predicted response as being a weighted sum of the input features, with weights (which can be negative) given by the entries of w. Trying to learn the parameters w[0] and b on our one-dimensional wave dataset might lead to the following line: 
  1. mglearn.plots.plot_linear_regression_wave()  

We added a coordinate cross to the plot to make it easier to understand the line. Looking at w[0] we see that the slope should be roughly around .4, which we can confirm visually in the plot above. The intercept is where the prediction line should cross the y-axis, which is slightly below 0, which you can also confirm in the image. Linear models for regression can be characterized as regression models for which the prediction is a line for a single feature, a plane when using two features, or a hyperplane in higher dimensions (that is, when there are more features). 

If you compare the predictions made by the red line with those made by the KNeighborsRegressor in Figure nearest_neighbor_regression, using a straight line to make predictions seems very restrictive. It looks like all the fine details of the data are lost. In a sense this is true. It is a strong (and somewhat unrealistic) assumption that our target y is a linear combination of the features. But looking at one-dimensional data gives a somewhat skewed perspective. For datasets with many features, linear models can be very powerful. In particular, if you have more features than training data points, any target y can be perfectly modeled (on the training set) as a linear function (footnote: this is easy to see if you know some linear algebra). 
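
That footnote's claim can be illustrated with a small sketch (not from the text): with more features than samples, ordinary least squares can interpolate the training targets exactly, so the training R^2 is essentially 1.0.

import numpy as np
from sklearn.linear_model import LinearRegression

# made-up data: 5 samples but 10 features
rng = np.random.RandomState(0)
X = rng.normal(size=(5, 10))
y = rng.normal(size=5)        # arbitrary targets

lr = LinearRegression().fit(X, y)
print("training R^2: %f" % lr.score(X, y))   # expect ~1.0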

There are many different linear models for regression. The difference between these models is how the model parameters w and b are learned from the training data, and how model complexity can be controlled. We will now go through the most popular linear models for regression. 

Linear Regression aka Ordinary Least Squares 
Linear regression or Ordinary Least Squares (OLS) is the simplest and most classic linear method for regression. Linear regression finds the parameters w and b that minimize the mean squared error between predictions and the true regression targets y on the training set. The mean squared error is the average of the squared differences between the predictions and the true values. Linear regression has no parameters (meaning no hyperparameters to tune), which is a benefit, but it also has no way to control model complexity. Here is the code that produces the model you can see in the figure above: 
- ch2_t4.py 
  1. import mglearn  
  2. from sklearn.model_selection import train_test_split  
  3. from sklearn.linear_model import LinearRegression  
  4.   
  5. X,y = mglearn.datasets.make_wave(n_samples=60)  
  6. X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)  
  7. lr = LinearRegression().fit(X_train, y_train)  
The “slope” parameters w, also called weights or coefficients, are stored in the coef_ attribute, while the offset or intercept b is stored in the intercept_ attribute. (Footnote: you might notice the strange-looking trailing underscore. scikit-learn always stores anything that is derived from the training data in attributes that end with a trailing underscore, to separate them from parameters that are set by the user.)
>>> from ch2_t4 import *
>>> lr
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
>>> lr.coef_
array([ 0.39390555])
>>> lr.intercept_
-0.031804343026759746
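
As a side note (assuming some linear algebra), the OLS solution can also be written in closed form as w = (X^T X)^{-1} X^T y. A sketch that recomputes the slope and intercept from the training data of ch2_t4.py; it should agree with lr.coef_ and lr.intercept_ up to numerical precision:

import numpy as np
from ch2_t4 import X_train, y_train, lr

# append a column of ones so the intercept b is learned as an extra weight
X1 = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
# solve the normal equations (X1^T X1) w = X1^T y
w = np.linalg.solve(X1.T.dot(X1), X1.T.dot(y_train))
print("slope:     %f vs lr.coef_[0]   = %f" % (w[0], lr.coef_[0]))
print("intercept: %f vs lr.intercept_ = %f" % (w[1], lr.intercept_))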

The intercept_ attribute is always a single float number, while the coef_ attribute is a numpy array with one entry per input feature. As we only have a single input feature in the wave dataset, lr.coef_ only has a single entry. Let’s look at the training set and test set performance: 
>>> print "Training set score: %f" % (lr.score(X_train, y_train))
Training set score: 0.670089
>>> print "Test set score: %f" % (lr.score(X_test, y_test))
Test set score: 0.659337

An R^2 of around .66 is not very good, but we can see that the scores on the training and test sets are very close together. This means we are likely underfitting, not overfitting. For this one-dimensional dataset, there is little danger of overfitting, as the model is very simple (or restricted). However, with higher-dimensional datasets (meaning a large number of features), linear models become more powerful, and there is a higher chance of overfitting. 
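
For reference, the value returned by score() is the R^2 (coefficient of determination), which can be recomputed by hand; a sketch, assuming the lr model and test split from ch2_t4.py above:

import numpy as np
from ch2_t4 import lr, X_test, y_test

y_pred = lr.predict(X_test)
# R^2 = 1 - (residual sum of squares) / (total sum of squares)
ss_res = np.sum((y_test - y_pred) ** 2)
ss_tot = np.sum((y_test - np.mean(y_test)) ** 2)
print("manual R^2: %f" % (1 - ss_res / ss_tot))
print("lr.score(): %f" % lr.score(X_test, y_test))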

Let’s take a look at how LinearRegression performs on a more complex dataset, like the Boston Housing dataset. Remember that this dataset has 506 samples and 105 derived features. We load the dataset and split it into a training and a test set. Then we build the linear regression model as before: 
  1. import mglearn  
  2.   
  3. from sklearn.model_selection import train_test_split  
  4. from sklearn.linear_model import LinearRegression  
  5.   
  6. X, y = mglearn.datasets.load_extended_boston()  
  7. X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)  
  8. lr = LinearRegression().fit(X_train, y_train)  
  9. print "Coefficients: \n%s" %  lr.coef_  
  10. print "Intercept: %f" % lr.intercept_  
  11. print("training set score: %f" % lr.score(X_train, y_train))  
  12. print("test set score: %f" % lr.score(X_test, y_test))  
When comparing training set and test set score, we find that we predict very accurately on the training set, but the R^2 on the test set is much worse: 
training set score: 0.944773
test set score: 0.791206

This is a clear sign of overfitting, and therefore we should try to find a model that allows us to control complexity. One of the most commonly used alternatives to standard linear regression is Ridge regression, which we will look into next. 

Ridge regression 
Ridge regression is also a linear model for regression, so the formula it uses to make predictions is still Formula (1), as for ordinary least squares. In Ridge regression, the coefficients w are chosen not only so that they predict well on the training data, but there is an additional constraint. We also want the magnitude of coefficients to be as small as possible; in other words, all entries of w should be close to 0. 

Intuitively, this means each feature should have as little effect on the outcome as possible (which translates to having a small slope), while still predicting well. This constraint is an example of what is called regularization. Regularization means explicitly restricting a model to avoid overfitting. The particular kind used by Ridge regression is known as l2 regularization. (footnote: mathematically, Ridge penalizes the l2 norm of the coefficients, or the Euclidean length of w.) 
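
As a side note (not in the text), the l2 penalty changes the objective to minimizing ||Xw - y||^2 + alpha * ||w||^2, which for a fixed alpha still has a closed-form solution. A minimal numpy sketch on made-up data, ignoring the intercept for brevity (scikit-learn's Ridge additionally handles the intercept and offers several solvers):

import numpy as np

# made-up data: 20 samples, 5 features
rng = np.random.RandomState(0)
X = rng.normal(size=(20, 5))
y = rng.normal(size=20)
alpha = 1.0

# ridge solution: w = (X^T X + alpha * I)^{-1} X^T y
w = np.linalg.solve(X.T.dot(X) + alpha * np.eye(X.shape[1]), X.T.dot(y))
print("ridge coefficients: %s" % w)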

Ridge regression is implemented in linear_model.Ridge. Let’s see how well it does on the extended Boston dataset: 
  1. import mglearn  
  2.   
  3. from sklearn.model_selection import train_test_split  
  4. from sklearn.linear_model import Ridge  
  5.   
  6. X, y = mglearn.datasets.load_extended_boston()  
  7. X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)  
  8. ridge = Ridge().fit(X_train, y_train)  
  9. print "Coefficients: \n%s" %  ridge.coef_  
  10. print "Intercept: %f" % ridge.intercept_  
  11.   
  12. print("training set score: %f" % ridge.score(X_train, y_train))  
  13. print("test set score: %f" % ridge.score(X_test, y_test))  
Execution output: 
...
training set score: 0.870375
test set score: 0.814100

As you can see, the training set score of Ridge is lower than for LinearRegression, while the test set score is higher. This is consistent with our expectation. With linear regression, we were overfitting to our data. Ridge is a more restricted model, so we are less likely to overfit. A less complex model means worse performance on the training set, but better generalization. 

As we are only interested in generalization performance, we should choose the Ridge model over the LinearRegression model. 

The Ridge model makes a trade-off between the simplicity of the model (near zero coefficients) and its performance on the training set. How much importance the model places on simplicity versus training set performance can be specified by the user, using the alpha parameter. Above, we used the default parameter alpha=1.0. There is no reason why this would give us the best trade-off, though. Increasing alpha forces coefficients to move more towards zero, which decreases training set performance, but might help generalization: 
>>> ridge10 = Ridge(alpha=10).fit(X_train, y_train)
>>> print("Training set score: %f" % (ridge10.score(X_train, y_train)))
Training set score: 0.767050
>>> print("Test set score: %f" % ridge10.score(X_test, y_test))
Test set score: 0.727757

Decreasing alpha allows the coefficients to be less restricted, meaning we move to the right in Figure model_complexity. 
Figure model_complexity 

For very small values of alpha, coefficients are barely restricted at all, and we end up with a model that resembles LinearRegression: 
>>> ridge01 = Ridge(alpha=0.1).fit(X_train, y_train)
>>> print("Training set score: %f" % (ridge01.score(X_train, y_train)))
Training set score: 0.917736
>>> print("Test set score: %f" % ridge01.score(X_test, y_test))
Test set score: 0.824025

Here, alpha=0.1 seems to be working well. We could try decreasing alpha even more to see whether generalization improves further. For now, notice how the parameter alpha corresponds to the model complexity as shown in Figure model_complexity. We will discuss methods to properly select parameters in Chapter 6 (Model Selection). 
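
As a preview of Chapter 6, one common way to pick alpha is cross-validation on the training set only; a sketch using cross_val_score (the candidate alpha values below are arbitrary):

import numpy as np
import mglearn
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import Ridge

X, y = mglearn.datasets.load_extended_boston()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# 5-fold cross-validation on the training set for a few candidate alphas
for alpha in [0.01, 0.1, 1, 10, 100]:
    scores = cross_val_score(Ridge(alpha=alpha), X_train, y_train, cv=5)
    print("alpha=%6.2f  mean CV R^2: %f" % (alpha, np.mean(scores)))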

We can also get a more qualitative insight into how the alpha parameter changes the model by inspecting the coef_ attribute of models with different values of alpha. A higher alpha means a more restricted model, so we expect that the entries of coef_ have smaller magnitude for a high value of alpha than for a low value of alpha. This is confirmed in the plot below: 
- ch2_t7.py 
  1. import mglearn  
  2.   
  3. from sklearn.model_selection import train_test_split  
  4. from sklearn.linear_model import LinearRegression  
  5. from sklearn.linear_model import Ridge  
  6.   
  7. X, y = mglearn.datasets.load_extended_boston()  
  8. X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)  
  9. lr = LinearRegression().fit(X_train, y_train)  
  10. ridge = Ridge().fit(X_train, y_train)  
  11. ridge01 = Ridge(alpha=0.1).fit(X_train, y_train)  
  12. ridge10 = Ridge(alpha=10).fit(X_train, y_train)  
  13.   
  14.   
  15. import os  
  16. dmode = os.environ.get('DISPLAY', '')  
  17.   
  18. if dmode:  
  19.     import matplotlib.pyplot as plt  
  20.     import numpy as np  
  21.     plt.title("ridge_coefficients")  
  22.     plt.plot(ridge.coef_, 'o', label='Ridge alpha=1')  
  23.     plt.plot(ridge10.coef_, 'o', label='Ridge alpha=10')  
  24.     plt.plot(ridge01.coef_, 'o', label='Ridge alpha=0.1')  
  25.     plt.plot(lr.coef_, 'o', label='LinearRegression')  
  26.     plt.ylim(-25, 25)  
  27.     plt.legend()  
  28.     plt.show()  
Figure ridge_coefficients 

Here, the x-axis enumerates the entries of coef_: x=0 shows the coefficient associated with the first feature, x=1 the coefficient associated with the second feature, and so on up to the last feature. The y-axis shows the numeric value of the corresponding coefficient. The main take-away here is that for alpha=10 (shown by the green dots), the coefficients are mostly between around -3 and 3. The coefficients for the ridge model with alpha=1 (shown by the blue dots) are somewhat larger. The red dots, corresponding to linear regression without any regularization (which would be alpha=0), have magnitudes larger still; some are so large that they even fall outside of the chart. 
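
One way to quantify what the plot shows is to compare the l2 norm (Euclidean length) of each coefficient vector; a sketch, assuming the lr, ridge, ridge01 and ridge10 models from ch2_t7.py above:

import numpy as np
from ch2_t7 import lr, ridge, ridge01, ridge10

# larger alpha -> smaller overall coefficient magnitude
for name, model in [("LinearRegression", lr), ("Ridge alpha=0.1", ridge01),
                    ("Ridge alpha=1", ridge), ("Ridge alpha=10", ridge10)]:
    print("%-18s ||w||_2 = %f" % (name, np.linalg.norm(model.coef_)))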

Lasso 
An alternative to Ridge for regularizing linear regression is the Lasso. The lasso also restricts coefficients to be close to zero, similarly to Ridge regression, but in a slightly different way, called “l1” regularization. (footnote: The Lasso penalizes the l1 norm of the coefficient vector, or in other words the sum of the absolute values of the coefficients). 

The consequence of l1 regularization is that when using the Lasso, some coefficients are exactly zero. This means some features are entirely ignored by the model. This can be seen as a form of automatic feature selection. Having some coefficients be exactly zero often makes a model easier to interpret, and can reveal the most important features of your model. 

Let’s apply the lasso to the extended Boston housing dataset: 
- ch2_t8.py 
  1. import mglearn  
  2.   
  3. from sklearn.model_selection import train_test_split  
  4. from sklearn.linear_model import Lasso  
  5.   
  6. X, y = mglearn.datasets.load_extended_boston()  
  7. X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)  
  8. lasso = Lasso().fit(X_train, y_train)  
  9. print "Coefficients: \n%s" %  lasso.coef_  
  10. print "Intercept: %f" % lasso.intercept_  
  11.   
  12. print("training set score: %f" % lasso.score(X_train, y_train))  
  13. print("test set score: %f" % lasso.score(X_test, y_test))  
The execution output: 
...
training set score: 0.267838
test set score: 0.259923

As you can see, the Lasso does quite badly, both on the training and the test set. This indicates that we are underfitting. We find that it only used three of the 105 features. 
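
This count can be checked by looking at how many entries of coef_ are nonzero; a sketch, assuming the lasso model from ch2_t8.py above:

import numpy as np
from ch2_t8 import lasso

# features whose coefficient is exactly zero are ignored by the model
print("number of features used: %d" % np.sum(lasso.coef_ != 0))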

Similarly to Ridge, the Lasso also has a regularization parameter alpha that controls how strongly coefficients are pushed towards zero. Above, we used the default of alpha=1.0. To diminish underfitting, let’s try decreasing alpha: 
- ch2_t9.py 
  1. import mglearn  
  2. import numpy as np  
  3. from sklearn.neighbors import KNeighborsRegressor  
  4. from sklearn.model_selection import train_test_split  
  5. from sklearn.linear_model import LinearRegression  
  6. from sklearn.linear_model import Ridge  
  7. from sklearn.linear_model import Lasso  
  8.   
  9. X, y = mglearn.datasets.load_extended_boston()  
  10. X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)  
  11. lasso001 = Lasso(alpha=0.01).fit(X_train, y_train)  
  12. print "Coefficients: \n%s" %  lasso001.coef_  
  13. print "Intercept: %f" % lasso001.intercept_  
  14. print "Number of features used: %d" % np.sum(lasso001.coef_ !=0)  
  15.   
  16. print("training set score: %f" % lasso001.score(X_train, y_train))  
  17. print("test set score: %f" % lasso001.score(X_test, y_test))  
The execution output: 

A lower alpha allowed us to fit a more complex model, which worked better on the training and the test data. The performance is slightly better than using Ridge, and we are using only 34 of the 105 features. This makes this model potentially easier to understand. If we set alpha too low, we again remove the effect of regularization and end up with a result similar to LinearRegression: 
>>> from ch2_t9 import *
>>> lasso00001 = Lasso(alpha=0.0001).fit(X_train, y_train)
>>> print("training set score: %f" % lasso00001.score(X_train, y_train))
training set score: 0.937435
>>> print("test set score: %f" % lasso00001.score(X_test, y_test))
test set score: 0.781569
>>> print("number of features used: %d" % np.sum(lasso00001.coef_ != 0))
number of features used: 101

Again, we can plot the coefficients of the different models, similarly to Figure ridge_coefficients. 
- ch2_t10.py 
  1. import mglearn  
  2.   
  3. from sklearn.model_selection import train_test_split  
  4. from sklearn.linear_model import LinearRegression  
  5. from sklearn.linear_model import Lasso  
  6. from sklearn.linear_model import Ridge  
  7.   
  8. X, y = mglearn.datasets.load_extended_boston()  
  9. X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)  
  10. lr = LinearRegression().fit(X_train, y_train)  
  11. ridge01 = Ridge(alpha=0.1).fit(X_train, y_train)  
  12. lasso = Lasso().fit(X_train, y_train)  
  13. lasso001 = Lasso(alpha=0.01).fit(X_train, y_train)  
  14. lasso00001 = Lasso(alpha=0.0001).fit(X_train, y_train)  
  15.   
  16.   
  17. import os  
  18. dmode = os.environ.get('DISPLAY', '')  
  19.   
  20. if dmode:  
  21.     import matplotlib.pyplot as plt  
  22.     import numpy as np  
  23.     plt.title("lasso_coefficients")  
  24.     plt.plot(lasso.coef_, 'o', label='Lasso alpha=1')  
  25.     plt.plot(lasso001.coef_, 'o', label='Lasso alpha=0.01')  
  26.     plt.plot(lasso00001.coef_, 'o', label='Lasso alpha=0.0001')  
  27.     plt.plot(ridge01.coef_, 'o', label='Ridge alpha=0.1')  
  28.     plt.ylim(-25, 25)  
  29.     plt.legend()  
  30.     plt.show()  
Figure lasso_coefficients 

In practice, Ridge regression is usually the first choice between these two models. However, if you have a large number of features and expect only a few of them to be important, Lasso might be a better choice. Similarly, if you would like to have a model that is easy to interpret, Lasso will provide a model that is easier to understand, as it will select only a subset of the input features. 
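
scikit-learn also provides the ElasticNet class, which combines the l1 penalty of Lasso and the l2 penalty of Ridge; a hedged sketch on the same data (the alpha and l1_ratio values below are arbitrary, not tuned):

import mglearn
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split

X, y = mglearn.datasets.load_extended_boston()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# l1_ratio mixes the two penalties: 1.0 is pure Lasso, 0.0 is pure Ridge
enet = ElasticNet(alpha=0.01, l1_ratio=0.5).fit(X_train, y_train)
print("training set score: %f" % enet.score(X_train, y_train))
print("test set score: %f" % enet.score(X_test, y_test))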

Linear models for Classification 
Linear models are also extensively used for classification. Let’s look at binary classification first. In this case, a prediction is made using the following formula: 
y = w[0] x[0] + w[1] x[1] + ... + w[p] x[p] + b; if y >= 0, predict Class +1, otherwise Class -1 ... Formula 2

The formula looks very similar to the one for linear regression, but instead of just returning the weighted sum of the features, we threshold the predicted value at zero. If the value is smaller than zero, we predict class -1; if it is larger than zero, we predict class +1. This prediction rule is common to all linear models for classification. Again, there are many different ways to find the coefficients w and the intercept b. 
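
A tiny sketch of this decision rule with made-up numbers:

import numpy as np

# hypothetical coefficients, intercept and data point (two features)
w = np.array([1.0, -2.0])
b = 0.5
x = np.array([3.0, 1.0])

score = np.dot(w, x) + b           # the weighted sum from Formula 2
print("predicted class: %+d" % (1 if score >= 0 else -1))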

For linear models for regression, the output y was a linear function of the features: a line, plane, or hyperplane (in higher dimensions). For linear models for classification, the decision boundary is a linear function of the input. In other words, a (binary) linear classifier is a classifier that separates two classes using a line, a plane or a hyperplane. We will see examples of that below. 

There are many algorithms for learning linear models. These algorithms all differ in the following two ways: 
1. How they measure how well a particular combination of coefficients and intercept fits the training data.
2. If and what kind of regularization they use.

Different algorithms choose different ways to measure what “fitting the training set well” means in point 1. For technical mathematical reasons, it is not possible to adjust w and b to minimize the number of misclassifications the algorithms produce, as one might hope. For our purposes, and many applications, the different choices of the measure in point 1 (called the loss function) are of little significance. 

The two most common linear classification algorithms are logistic regression, implemented in linear_model.LogisticRegression, and linear support vector machines (linear SVMs), implemented in svm.LinearSVC (SVC stands for Support Vector Classifier). Despite its name, LogisticRegression is a classification algorithm and not a regression algorithm, and should not be confused with LinearRegression. 

We can apply the LogisticRegression and LinearSVC models to the forge dataset, and visualize the decision boundary as found by the linear models: 
- ch2_t11.py 
  1. import mglearn  
  2. import numpy as np  
  3. from sklearn.neighbors import KNeighborsRegressor  
  4. from sklearn.model_selection import train_test_split  
  5. from sklearn.linear_model import LinearRegression  
  6. from sklearn.linear_model import LogisticRegression  
  7. from sklearn.svm import LinearSVC  
  8.   
  9. X, y = mglearn.datasets.make_forge()  
  10.   
  11. import os  
  12. dmode = os.environ.get('DISPLAY', '')  
  13.   
  14. if dmode:  
  15.     import matplotlib.pyplot as plt  
  16.     import numpy as np  
  17.     fig, axes = plt.subplots(1, 2, figsize=(10, 3))  
  18.     plt.suptitle("linear_classifiers")  
  19.     for model, ax in zip([LinearSVC(), LogisticRegression()], axes):  
  20.         clf = model.fit(X, y)  
  21.         mglearn.plots.plot_2d_separator(clf, X, fill=False, eps=0.5, ax=ax, alpha=.7)  
  22.         ax.scatter(X[:,0], X[:,1], c=y, s=60, cmap=mglearn.cm2)  
  23.         ax.set_title("%s" % clf.__class__.__name__)  
  24.     plt.show()  

In this figure, we have the first feature of the forge dataset on the x axis and the second feature on the y axis as before. We display the decision boundaries found by LinearSVC and LogisticRegression respectively as straight lines, separating the area classified as blue on the bottom from the area classified as red on the top. In other words, any new data point that lies above the black line will be classified as red by the respective classifier, while any point that lies below the black line will be classified as blue. 

The two models come up with similar decision boundaries. Note that both misclassify two of the points. By default, both models apply an l2 regularization, in the same way that Ridge does for regression. 

For LogisticRegression and LinearSVC the trade-off parameter that determines the strength of the regularization is called C, and higher values of C correspond to less regularization. In other words, when using a high value of the parameter C, LogisticRegression and LinearSVC try to fit the training set as best as possible, while with low values of the parameter C, the models put more emphasis on finding a coefficient vector w that is close to zero. 

There is another interesting intuition for how the parameter C acts. Using low values of C will cause the algorithms to try to adjust to the “majority” of data points, while using a higher value of C stresses the importance that each individual data point be classified correctly. Here is an illustration using LinearSVC: 
  1. mglearn.plots.plot_linear_svc_regularization()  


On the left-hand side, we have a very small C, corresponding to a lot of regularization. Most of the blue points are at the top, and most of the red points are at the bottom. The strongly regularized model chooses a relatively horizontal line, misclassifying two points. In the center plot, C is slightly higher, and the model focuses more on the two misclassified samples, tilting the decision boundary. Finally, on the right-hand side, a very high value of C tilts the decision boundary a lot, now correctly classifying all red points. One of the blue points is still misclassified, as it is not possible to correctly classify all points in this dataset using a straight line. The model illustrated on the right-hand side tries hard to correctly classify all points, but might not capture the overall layout of the classes well. In other words, this model is likely overfitting. 

Similarly to the case of regression, linear models for classification might seem very restrictive in low-dimensional spaces, only allowing for decision boundaries which are straight lines or planes. Again, in high dimensions, linear models for classification become very powerful, and guarding against overfitting becomes increasingly important when considering more features. 

Let’s analyze LogisticRegression in more detail on the breast_cancer dataset: 
- ch2_t12.py 
  1. import mglearn  
  2. import numpy as np  
  3. from sklearn.model_selection import train_test_split  
  4. from sklearn.linear_model import LogisticRegression  
  5. from sklearn.datasets import load_breast_cancer  
  6.   
  7. cancer = load_breast_cancer()  
  8. X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, stratify=cancer.target, random_state=42)  
  9. logisticregression = LogisticRegression().fit(X_train, y_train)  
  10.   
  11. print("training set score: %f" % logisticregression.score(X_train, y_train))  
  12. print("test set score: %f" % logisticregression.score(X_test, y_test))  
Execution output: 
training set score: 0.955399
test set score: 0.958042

The default value of C=1 provides quite good performance, with 95% accuracy on both the training and the test set. As training and test set performance are very close, it is likely that we are underfitting. Let’s try to increase C to fit a more flexible model. 
>>> from ch2_t12 import *
>>> logisticregression100 = LogisticRegression(C=100).fit(X_train, y_train)
>>> print("training set score: %f" % logisticregression100.score(X_train, y_train))
training set score: 0.976526
>>> print("test set score: %f" % logisticregression100.score(X_test, y_test))
test set score: 0.958042

Using C=100 results in higher training set accuracy, and also a slightly increased test set accuracy, confirming our intuition that a more complex model should perform better. We can also investigate what happens if we use an even more regularized model than the default of C=1, by setting C=0.01: 
>>> logisticregression001 = LogisticRegression(C=0.01).fit(X_train, y_train)
>>> print("training set score: %f" % logisticregression001.score(X_train, y_train))
training set score: 0.924883
>>> print("test set score: %f" % logisticregression001.score(X_test, y_test))
test set score: 0.958042

As expected, when moving more to the left in Figure model_complexity from an already underfit model, both training and test set accuracy decrease relative to the default parameters. Finally, let’s look at the coefficients learned by the models with the three different settings of the regularization parameter C: 
- ch2_t13.py 
  1. import mglearn  
  2. import numpy as np  
  3. from sklearn.model_selection import train_test_split  
  4. from sklearn.linear_model import LogisticRegression  
  5. from sklearn.datasets import load_breast_cancer  
  6.   
  7. cancer = load_breast_cancer()  
  8. X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, random_state=42)  
  9. logisticregression = LogisticRegression().fit(X_train, y_train)  
  10.   
  11.   
  12. import os  
  13. dmode = os.environ.get('DISPLAY', '')  
  14.   
  15. if dmode:  
  16.     import matplotlib.pyplot as plt  
  17.     import numpy as np  
  18.     logisticregression001 = LogisticRegression(C=0.01).fit(X_train, y_train)  
  19.     logisticregression100 = LogisticRegression(C=100).fit(X_train, y_train)  
  20.     plt.plot(logisticregression.coef_.T, 'o', label="C=1")  
  21.     plt.plot(logisticregression100.coef_.T, 'o', label='C=100')  
  22.     plt.plot(logisticregression001.coef_.T, 'o', label='C=0.01')  
  23.     plt.xticks(range(cancer.data.shape[1]), cancer.feature_names, rotation=90)  
  24.     plt.ylim(-5, 5)  
  25.     plt.legend()  
  26.     plt.show()  

As LogisticRegression applies an L2 regularization by default, the result looks similar to Ridge in Figure ridge_coefficients. Stronger regularization pushes coefficients more and more towards zero, though coefficients never become exactly zero. Inspecting the plot more closely, we can also see an interesting effect in the third coefficient, for “mean perimeter”. For C=100 and C=1, the coefficient is negative, while for C=0.01, the coefficient is positive, with a magnitude that is even larger than for C=1. Interpreting a model like this, one might think the coefficient tells us which class a feature is associated with. For example, one might think that a high “texture error” feature is related to a sample being “malignant”. However, the change of sign in the coefficient for “mean perimeter” means that, depending on which model we look at, a high “mean perimeter” could be taken as being indicative of either “benign” or “malignant”. This illustrates that interpretations of coefficients of linear models should always be taken with a grain of salt. 

If we desire a more interpretable model, using L1 regularization might help, as it limits the model to only using a few features. Here is the coefficient plot and classification accuracies for L1 regularization: 
- ch2_t14.py 
  1. import mglearn  
  2. import numpy as np  
  3. from sklearn.model_selection import train_test_split  
  4. from sklearn.linear_model import LogisticRegression  
  5. from sklearn.datasets import load_breast_cancer  
  6.   
  7. cancer = load_breast_cancer()  
  8. X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, random_state=42)  
  9. logisticregression = LogisticRegression().fit(X_train, y_train)  
  10.   
  11.   
  12. import os  
  13. dmode = os.environ.get('DISPLAY', '')  
  14.   
  15. if dmode:  
  16.     import matplotlib.pyplot as plt  
  17.     import numpy as np  
  18.     for C in [0.001, 1, 100]:  
  19.         lr_l1 = LogisticRegression(C=C, penalty="l1").fit(X_train, y_train)  
  20.         print("training accuracy of L1 logreg with C=%f: %f" % (C, lr_l1.score(X_train, y_train)))  
  21.         print("test accuracy of L1 logreg with C=%f: %f" % (C, lr_l1.score(X_test, y_test)))  
  22.         plt.plot(lr_l1.coef_.T, 'o', label="C=%f" % C)  
  23.     plt.xticks(range(cancer.data.shape[1]), cancer.feature_names, rotation=90)  
  24.     plt.ylim(-5, 5)  
  25.     plt.legend(loc=2)  
  26.     plt.show()  

Linear Models for multiclass classification 
Many linear classification models are binary models, and don’t extend naturally to the multi-class case (with the exception of logistic regression). A common technique to extend a binary classification algorithm to a multi-class classification algorithm is the one-vs-rest approach. In the one-vs-rest approach, a binary model is learned for each class, which tries to separate that class from all of the other classes, resulting in as many binary models as there are classes. 

To make a prediction, all binary classifiers are run on a test point. The classifier that has the highest score on its single class “wins”, and this class label is returned as the prediction. Having one binary classifier per class results in having one vector of coefficients w and one intercept b for each class. The mathematics behind logistic regression are somewhat different from the one-vs-rest approach, but they also result in one coefficient vector and intercept per class, and the same method of making a prediction is applied. 

Let’s apply the one-vs-rest method to a simple three-class classification dataset. We use a two-dimensional dataset, where each class is given by data sampled from a Gaussian distribution. 
  1. import mglearn  
  2. from sklearn.datasets import make_blobs  
  3. import matplotlib.pyplot as plt  
  4.   
  5. X, y = make_blobs(random_state=42)  
  6. plt.scatter(X[:,0], X[:,1], c=y, s=60, cmap=mglearn.cm3)  
  7. plt.show()  

Now, we train a LinearSVC classifier on the dataset. 
  1. from sklearn.svm import LinearSVC  
  2. linear_svm = LinearSVC().fit(X,y)  
  3. print "Shape of LinearSVM coef: (%d, %d)" % linear_svm.coef_.shape  
  4. print "Shape of LinearSVM intercept: (%d, )" % linear_svm.intercept_.shape  
The execution output: 
Shape of LinearSVM coef: (3, 2)
Shape of LinearSVM intercept: (3, )

We see that the shape of coef_ is (3, 2), meaning that each row of coef_ contains the coefficient vector for one of the three classes. Each row has two entries, corresponding to the two features in the dataset. The intercept_ is now a one-dimensional array, storing the intercepts for each class. The short sketch below shows how these per-class coefficients and intercepts combine into a prediction. 
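
To see the “highest score wins” rule at work, we can compute the per-class scores by hand from coef_ and intercept_; a minimal sketch that should agree with what predict returns, since LinearSVC uses a one-vs-rest scheme by default:

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import LinearSVC

X, y = make_blobs(random_state=42)
linear_svm = LinearSVC().fit(X, y)

# one score per class and sample: w . x + b, using each class's row of coef_
scores = np.dot(X, linear_svm.coef_.T) + linear_svm.intercept_
manual_pred = np.argmax(scores, axis=1)   # the class with the highest score wins
print("agrees with predict(): %s" % np.all(manual_pred == linear_svm.predict(X)))

Next, let’s visualize the lines given by the three binary classifiers: 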
- ch2_t16.py 
  1. import mglearn  
  2. import numpy as np  
  3. from sklearn.model_selection import train_test_split  
  4. from sklearn.linear_model import LogisticRegression  
  5. from sklearn.datasets import make_blobs  
  6.   
  7. X, y = make_blobs(random_state=42)  
  8. #X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, random_state=42)  
  9.   
  10. from sklearn.svm import LinearSVC  
  11. linear_svm = LinearSVC().fit(X,y)  
  12. print "Shape of LinearSVM coef: (%d, %d)" % linear_svm.coef_.shape  
  13. print "Shape of LinearSVM intercept: (%d, )" % linear_svm.intercept_.shape  
  14.   
  15. import os  
  16. dmode = os.environ.get('DISPLAY', '')  
  17.   
  18. if dmode:  
  19.     import matplotlib.pyplot as plt  
  20.     import numpy as np  
  21.     plt.scatter(X[:,0], X[:,1], c=y, s=60, cmap=mglearn.cm3)  
  22.     line = np.linspace(-15, 15)  
  23.     for coef, intercept in zip(linear_svm.coef_, linear_svm.intercept_):  
  24.         # w0 * x + b + w1 * y = 0  
  25.         # y = -( x * w0 + b) / w1  
  26.         plt.plot(line, -(line * coef[0] + intercept) / coef[1])  
  27.     plt.ylim(-10, 15)  
  28.     plt.xlim(-10, 8)  
  29.     plt.show()  

The red line shows the decision boundary for the binary classifier for the red class, and so on. You can see that all the red points in the training data are under the red line, which means they are on the “red” side of this binary classifier. The red points are left of the green line, which means they are classified as “rest” by the binary classifier for the green class. The red points are below the blue line, which means the binary classifier for the blue class also classifies them as “rest”. Therefore, any point in this area will be classified as red by the final classifier (the value of Formula (2) for the red classifier is greater than zero, while it is smaller than zero for the other two classes). 

But what about the triangle in the middle of the plot? All three binary classifiers classify points there as “rest”. Which class would a point there be assigned to? The answer is the one with the highest value in Formula (2): the class of the closest line. The following figure shows the predictions for all regions of the 2D space: 
- ch2_t17.py 
  1. import mglearn  
  2. import numpy as np  
  3. from sklearn.model_selection import train_test_split  
  4. from sklearn.linear_model import LogisticRegression  
  5. from sklearn.datasets import make_blobs  
  6.   
  7. X, y = make_blobs(random_state=42)  
  8. #X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, random_state=42)  
  9.   
  10. from sklearn.svm import LinearSVC  
  11. linear_svm = LinearSVC().fit(X,y)  
  12. print "Shape of LinearSVM coef: (%d, %d)" % linear_svm.coef_.shape  
  13. print "Shape of LinearSVM intercept: (%d, )" % linear_svm.intercept_.shape  
  14.   
  15. import os  
  16. dmode = os.environ.get('DISPLAY', '')  
  17.   
  18. if dmode:  
  19.     import matplotlib.pyplot as plt  
  20.     import numpy as np  
  21.     mglearn.plots.plot_2d_classification(linear_svm, X, fill=True, alpha=.7)  
  22.     plt.scatter(X[:,0], X[:,1], c=y, s=60)  
  23.     line = np.linspace(-15, 15)  
  24.     for coef, intercept in zip(linear_svm.coef_, linear_svm.intercept_):  
  25.         # w0 * x + b + w1 * y = 0  
  26.         # y = -( x * w0 + b) / w1  
  27.         plt.plot(line, -(line * coef[0] + intercept) / coef[1])  
  28.     plt.ylim(-10, 15)  
  29.     plt.xlim(-10, 8)  
  30.     plt.show()  

Strengths, weaknesses and parameters 
The main parameter of linear models is the regularization parameter, called alpha in the regression models and C in LinearSVC and LogisticRegression. Large alpha or small C mean simple models. In particular for the regression models, tuning this parameter is quite important. Usually C and alpha are searched for on a logarithmic scale. The other decision you have to make is whether you want to use L1 regularization or L2 regularization. If you assume that only a few of your features are actually important, you should use L1. Otherwise, you should default to L2. 

L1 can also be useful if interpretability of the model is important. As L1 will use only a few features, it is easier to explain which features are important to the model, and what the effect of these features is. 

Linear models are very fast to train, and also fast to predict. They scale to very large datasets and work well with sparse data. If your data consists of hundreds of thousands or millions of samples, you might want to investigate SGDClassifier and SGDRegressor, which implement even more scalable versions of the linear models described above. 
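
A hedged sketch of what that might look like for regression, using SGDRegressor on the extended Boston data (the penalty and alpha values below are arbitrary and play the role of the regularization discussed above):

import mglearn
from sklearn.linear_model import SGDRegressor
from sklearn.model_selection import train_test_split

X, y = mglearn.datasets.load_extended_boston()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# an l2-penalized linear regression fit by stochastic gradient descent
sgd = SGDRegressor(penalty="l2", alpha=0.001, random_state=0).fit(X_train, y_train)
print("training set score: %f" % sgd.score(X_train, y_train))
print("test set score: %f" % sgd.score(X_test, y_test))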

Another strength of linear models is that they make it relatively easy to understand how a prediction is made, using Formula (1) for regression and Formula (2) for classification. Unfortunately, it is often not entirely clear why coefficients are the way they are. This is particularly true if your dataset has highly correlated features; in these cases, the coefficients might be hard to interpret. 

Linear models often perform well when the number of features is large compared to the number of samples. They are also often used on very large datasets, simply because other models are not feasible to train. However, on smaller datasets, other models might yield better generalization performance. 

Supplement : 
[ ML In Action ] Predicting numeric values : regression - Linear regression (1) 
[ ML In Action ] Predicting numeric values : regression - Linear regression (2) 
[ ML In Action ] Predicting numeric values : regression - Linear regression (3) 
[ ML Foundation ] Section2 : Learning to Answer Yes/No - PLA (Part1) 
[ ML Foundation ] Section2 : Learning to Answer Yes/No - PLA (Part2)
