2021年3月27日 星期六

[Linux 常見問題] Bash - How to check if a string contains a substring in Bash

 Source From Here

Question
I have a string in Bash:
  1. string="My string"  
How can I test if it contains another string?

HowTo
You can use Marcus's answer (* wildcards) outside a case statement, too, if you use double brackets:
  1. string='My long string'  
  2. if [[ $string == *"My long"* ]]; then  
  3.   echo "It's there!"  
  4. fi  
Note that spaces in the needle string need to be placed between double quotes, and the * wildcards should be outside. Also note that a simple comparison operator is used (i.e. ==), not the regex operator =~.
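For reference, below is a minimal sketch (assuming bash 3.2 or later) of the two alternatives mentioned above: the case-statement form with * wildcards, and the =~ operator, where quoting the right-hand side makes bash treat the pattern as a literal string instead of a regex:

string='My long string'

# Alternative 1: the case-statement form with * wildcards.
case "$string" in
  *"My long"*)
    echo "It's there!"
    ;;
esac

# Alternative 2: the =~ operator. Quoting the right-hand side (bash 3.2+)
# forces a literal match; unquoted, the space would need escaping and the
# pattern would be interpreted as a regular expression.
if [[ $string =~ "My long" ]]; then
  echo "It's there!"
fi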

[Linux 常見問題] Bash - How to process each output line in a loop?

 Source From Here

Question
I have a number of lines retrieved from a file after running the grep command as follows:
  1. var=`grep xyz abc.txt`  
Let's say I got 10 lines which contain xyz as a result.

Now I need to process each line I got as a result of the grep command. How do I proceed with this?

HowTo
One of the easy ways is not to store the output in a variable, but directly iterate over it with a while/read loop. Something like:
test.sh
  1. #!/bin/sh  
  2. grep xyz abc.txt | while read -r line ; do  
  3.     echo "Processing $line"  
  4.     # your code goes here  
  5. done  
There are variations on this scheme depending on exactly what you're after. A sample usage:
root@localhost:demo_bash# cat abc.txt
xyz 123
test line1
456 xyz
test line2


root@localhost:demo_bash# ./test.sh
Processing xyz 123
Processing 456 xyz
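One such variation, as a hedged sketch (bash-specific, since process substitution is not available in plain POSIX sh): if you need variables assigned inside the loop to stay visible after it, reading from process substitution avoids the subshell created by the pipe:

#!/bin/bash
# Variation: read from process substitution instead of a pipe, so the
# counter updated inside the loop survives after the loop ends.
count=0
while read -r line ; do
    echo "Processing $line"
    count=$((count + 1))
done < <(grep xyz abc.txt)
echo "Matched $count line(s)"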


2021年3月20日 星期六

[ ML 文章收集 ] Evaluate the Performance Of Deep Learning Models in Keras

 Preface

(article source) Keras is an easy-to-use and powerful Python library for deep learning.

There are a lot of decisions to make when designing and configuring your deep learning models. Most of these decisions must be resolved empirically through trial and error and evaluating them on real data.

As such, it is critically important to have a robust way to evaluate the performance of your neural networks and deep learning models. In this post you will discover a few ways that you can use to evaluate model performance using Keras.

For the sample code below to work properly, we first have to import the necessary packages:
  1. import numpy as np  
  2. import pandas as pd  
  3. from keras.models import Sequential  
  4. from keras.layers import Dense  
  5. from sklearn.model_selection import train_test_split  
  6. from sklearn.model_selection import StratifiedKFold  
  7. from matplotlib import pyplot as plt  
  8.   
  9. # fix random seed for reproducibility  
  10. seed = 7  
  11. np.random.seed(seed)  
Data Set
All examples in this post use the Pima Indians onset of diabetes dataset. You can download it from the UCI Machine Learning Repository and save the data file in your current working directory with the filename pima-indians-diabetes.csv (update: download from here).
  1. pima_df = pd.read_csv("../../datas/kaggle_pima-indians-diabetes-database/diabetes.csv")  
  2. pima_df.head()  


Then we can split the raw dataset into features and target labels:
  1. # split into input (X) and output (Y) variables  
  2. X = pima_df.iloc[:,:-1].values  
  3. y = pima_df.iloc[:,-1].values  
Empirically Evaluate Network Configurations
There are a myriad of decisions you must make when designing and configuring your deep learning models.

Many of these decisions can be resolved by copying the structure of other people’s networks and using heuristics. Ultimately, the best technique is to actually design small experiments and empirically evaluate options using real data.

This includes high-level decisions like the number, size and type of layers in your network. It also includes the lower level decisions like the choice of loss function, activation functions, optimization procedure and number of epochs.

Deep learning is often used on problems that have very large datasets. That is tens of thousands or hundreds of thousands of instances.

As such, you need to have a robust test harness that allows you to estimate the performance of a given configuration on unseen data, and reliably compare the performance to other configurations.

Data Splitting
The large amount of data and the complexity of the models require very long training times.

As such, it is typical to use a simple separation of data into training and test datasets or training and validation datasets. Keras provides two convenient ways of evaluating your deep learning algorithms this way:
* Use an automatic verification dataset.
* Use a manual verification dataset.

Use an Automatic Verification Dataset
Keras can separate a portion of your training data into a validation dataset and evaluate the performance of your model on that validation dataset each epoch.

You can do this by setting the validation_split argument on the fit() function to a percentage of the size of your training dataset. For example, a reasonable value might be 0.2 or 0.33 for 20% or 33% of your training data held back for validation.

The example below demonstrates the use of an automatic validation dataset on a small binary classification problem:
  1. def get_model():  
  2.     model = Sequential()  
  3.     model.add(Dense(12, input_dim=8, activation='relu'))  
  4.     model.add(Dense(8, activation='relu'))  
  5.     model.add(Dense(1, activation='sigmoid'))  
  6.     # Compile model  
  7.     model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])  
  8.       
  9.     return model  
  10.   
  11. # create model  
  12. model = get_model()  
  13.   
  14. # Fit the model  
  15. history = model.fit(X, y, validation_split=0.33, epochs=150, batch_size=10)  
Running the example, you can see that the verbose output on each epoch shows the loss and accuracy on both the training dataset and the validation dataset:
  1. ...  
  2. 52/52 [==============================] - 0s 942us/step - loss: 0.5290 - accuracy: 0.7073 - val_loss: 0.5371 - val_accuracy: 0.7559  
  3. Epoch 150/150  
  4. 52/52 [==============================] - 0s 1ms/step - loss: 0.5097 - accuracy: 0.7412 - val_loss: 0.5336 - val_accuracy: 0.7638  


Use a Manual Verification Dataset
Keras also allows you to manually specify the dataset to use for validation during training.

In this example we use the handy train_test_split() function from the Python scikit-learn machine learning library to separate our data into a training and test dataset. We use 67% for training and the remaining 33% of the data for validation.

The validation dataset can be specified to the fit() function in Keras by the validation_data argument. It takes a tuple of the input and output datasets:
  1. # split into 67% for train and 33% for test  
  2. X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=seed)  
  3.   
  4. # create model  
  5. model = get_model()  
  6.   
  7. # Fit the model  
  8. history = model.fit(X_train, y_train, validation_data=(X_test,y_test), epochs=150, batch_size=10)  
Output:
  1. ...  
  2. 52/52 [==============================] - 0s 978us/step - loss: 0.4699 - accuracy: 0.7632 - val_loss: 0.5641 - val_accuracy: 0.7244  
  3. Epoch 150/150  
  4. 52/52 [==============================] - 0s 978us/step - loss: 0.4971 - accuracy: 0.7594 - val_loss: 0.5943 - val_accuracy: 0.7165  
Manual k-Fold Cross Validation
The gold standard for machine learning model evaluation is k-fold cross validation.

It provides a robust estimate of the performance of a model on unseen data. It does this by splitting the training dataset into k subsets, taking turns training models on all subsets except one (which is held out), and evaluating model performance on the held-out validation subset. The process is repeated until every subset has had a turn as the held-out validation set. The performance measure is then averaged across all the models that are created.

Cross validation is often not used for evaluating deep learning models because of the greater computational expense. For example, k-fold cross validation is often used with 5 or 10 folds. As such, 5 or 10 models must be constructed and evaluated, greatly adding to the evaluation time of a model.

Nevertheless, when the problem is small enough or you have sufficient compute resources, k-fold cross validation can give you a less biased estimate of the performance of your model.

In the example below we use the handy StratifiedKFold class from the scikit-learn Python machine learning library to split up the training dataset into 10 folds. The folds are stratified, meaning that the algorithm attempts to balance the number of instances of each class in each fold.

The example creates and evaluates 10 models using the 10 splits of the data and collects all of the scores. The verbose output for each epoch is turned off by passing verbose=0 to the fit() and evaluate() functions on the model.

The performance is printed for each model and it is stored. The average and standard deviation of the model performance is then printed at the end of the run to provide a robust estimate of model accuracy:
  1. # define 10-fold cross validation test harness  
  2. kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)  
  3. cvscores = []  
  4.   
  5. for train, test in kfold.split(X, y):  
  6.     # create model  
  7.     model = get_model()  
  8.           
  9.     # Fit the model  
  10.     model.fit(X[train], y[train], epochs=150, batch_size=10, verbose=0)  
  11.   
  12.     # evaluate the model  
  13.     scores = model.evaluate(X[test], y[test], verbose=0)  
  14.     print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))  
  15.     cvscores.append(scores[1] * 100)  
  16.       
  17. print("%.2f%% (+/- %.2f%%)" % (np.mean(cvscores), np.std(cvscores)))  
Running the example will take less than a minute and will produce the below output:
  1. ...  
  2. accuracy: 76.32%  
  3. accuracy: 81.58%  
  4. 74.62% (+/- 3.62%)  
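As a small optional addition (not part of the original article), the already-imported matplotlib can give a quick visual summary of the spread of the fold scores collected in cvscores, for example as a boxplot:

# Visualize the spread of the 10 fold accuracies collected above.
plt.boxplot(cvscores)
plt.title('10-fold cross validation accuracy')
plt.ylabel('accuracy (%)')
plt.show()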
Train/Validation Performance
If you are interested in how the train/validation accuracy evolves during training, you can draw a chart of it as below:
  1. # create model  
  2. model = get_model()  
  3.   
  4. # Fit the model  
  5. history = model.fit(X, y, validation_split=0.33, epochs=150, batch_size=10, verbose=0)  
  6.   
  7. history.history.keys()  
Output:
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])

Let's draw the train/validation accuracy line chart as the epochs grow:
  1. plt.plot(history.history['accuracy'])  
  2. plt.plot(history.history['val_accuracy'])  
  3. plt.title('model accuracy')  
  4. plt.ylabel('accuracy')  
  5. plt.xlabel('epoch')  
  6. plt.legend(['train', 'val'], loc='upper left')  
  7. plt.show()  


Then let's draw the loss chart, which shrinks as the epochs grow:
  1. plt.plot(history.history['loss'])  
  2. plt.plot(history.history['val_loss'])  
  3. plt.title('model loss')  
  4. plt.ylabel('loss')  
  5. plt.xlabel('epoch')  
  6. plt.legend(['train', 'val'], loc='upper left')  
  7. plt.show()  


notebook link of this article.

Supplement
Kaggle - Pima Indians Diabetes EDA






[Git 常見問題] error: The following untracked working tree files would be overwritten by merge

  Source From Here
Solution 1:
// x ----- also remove files ignored by git and files git does not recognize
// d ----- remove files in paths that have not been added to git
// f ----- force the operation
#   git clean -d -fx
Solution 2: Today on the server, gi...