Sunday, January 14, 2018

[ ML Article Collection ] fastText - A library for efficient learning of word representations and sentence classification

Source From Here 
Get started 

What is fastText? 
fastText is a library for efficient learning of word representations and sentence classification. 

Requirements 
fastText builds on modern Mac OS and Linux distributions. Since it uses C++11 features, it requires a compiler with good C++11 support. These include: 
* (gcc-4.6.3 or newer) or (clang-3.3 or newer)

Compilation is carried out using a Makefile, so you will need to have a working make. For the word-similarity evaluation script you will need: 
* python 2.6 or newer
* numpy & scipy

Building fastText 
In order to build fastText, use the following: 
# git clone https://github.com/facebookresearch/fastText.git
# cd fastText
# make

This will produce object files for all the classes as well as the main binary fasttext. If you do not plan on using the default system-wide compiler, update the two macros defined at the beginning of the Makefile (CC and INCLUDES). 

Cheatsheet 

Word representation learning 
In order to learn word vectors do: 
# ./fasttext skipgram -input data.txt -output model


Obtaining word vectors 
Print word vectors for a text file queries.txt containing words. 
# ./fasttext print-word-vectors model.bin < queries.txt


Text classification 
In order to train a text classifier do: 
# ./fasttext supervised -input train.txt -output model

Once the model is trained, you can evaluate it by computing the precision and recall at k (P@k and R@k) on a test set using: 
# ./fasttext test model.bin test.txt 1


In order to obtain the k most likely labels for a piece of text, use: 
# ./fasttext predict model.bin test.txt k


In order to obtain the k most likely labels and their associated probabilities for a piece of text, use: 
# ./fasttext predict-prob model.bin test.txt k


If you want to compute vector representations of sentences or paragraphs, please use: 
# ./fasttext print-sentence-vectors model.bin < text.txt

Quantization 
In order to create a .ftz file with a smaller memory footprint do: 
# ./fasttext quantize -output model


All other commands, such as test, also work with this model: 
# ./fasttext test model.ftz test.txt


Text classification 
Text classification is a core problem in many applications, like spam detection, sentiment analysis or smart replies. In this tutorial, we describe how to build a text classifier with the fastText tool.

What is text classification? 
The goal of text classification is to assign documents (such as emails, posts, text messages, product reviews, etc.) to one or multiple categories. Such categories can be review scores, spam vs. non-spam, or the language in which the document was typed. Nowadays, the dominant approach to building such classifiers is machine learning, that is, learning classification rules from examples. In order to build such classifiers, we need labeled data, which consists of documents and their corresponding categories (or tags, or labels). 

As an example, we build a classifier which automatically classifies stackexchange questions about cooking into one of several possible tags, such as pot, bowl or baking.

Getting and preparing the data 
As mentioned in the introduction, we need labeled data to train our supervised classifier. In this tutorial, we are interested in building a classifier to automatically recognize the topic of a stackexchange question about cooking. Let's download examples of questions from the cooking section of Stackexchange, and their associated tags: 
# wget https://s3-us-west-1.amazonaws.com/fasttext-vectors/cooking.stackexchange.tar.gz && tar xvzf cooking.stackexchange.tar.gz
# head cooking.stackexchange.txt
__label__sauce __label__cheese How much does potato starch affect a cheese sauce recipe?
__label__food-safety __label__acidity Dangerous pathogens capable of growing in acidic environments
__label__cast-iron __label__stove How do I cover up the white spots on my cast iron stove?
__label__restaurant Michelin Three Star Restaurant; but if the chef is not there
__label__knife-skills __label__dicing Without knife skills, how can I quickly and accurately dice vegetables?
__label__storage-method __label__equipment __label__bread What's the purpose of a bread box?
__label__baking __label__food-safety __label__substitutions __label__peanuts how to seperate peanut oil from roasted peanuts at home?
__label__chocolate American equivalent for British chocolate terms
__label__baking __label__oven __label__convection Fan bake vs bake
__label__sauce __label__storage-lifetime __label__acidity __label__mayonnaise Regulation and balancing of readymade packed mayonnaise and other sauces

Each line of the text file contains a list of labels, followed by the corresponding document. All the labels start with the __label__ prefix, which is how fastText distinguishes labels from words. The model is then trained to predict the labels given the words in the document. Before training our first classifier, we need to split the data into training and validation sets. We will use the validation set to evaluate how well the learned classifier performs on new data. 
# wc cooking.stackexchange.txt
15404 169582 1401900 cooking.stackexchange.txt

Our full dataset contains 15404 examples. Let's split it into a training set of 12404 examples and a validation set of 3000 examples: 
# head -n 12404 cooking.stackexchange.txt > cooking.train
# tail -n 3000 cooking.stackexchange.txt > cooking.valid
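
As a side note, here is a minimal Python sketch of how a line in this format can be split into its labels and its text (a hypothetical parse_line helper for illustration, not part of fastText):

def parse_line(line):
    """Split a fastText supervised training line into (labels, text)."""
    tokens = line.strip().split()
    labels = [t[len("__label__"):] for t in tokens if t.startswith("__label__")]
    text = " ".join(t for t in tokens if not t.startswith("__label__"))
    return labels, text

line = "__label__sauce __label__cheese How much does potato starch affect a cheese sauce recipe?"
print(parse_line(line))
# (['sauce', 'cheese'], 'How much does potato starch affect a cheese sauce recipe?')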

Our first classifier 
We are now ready to train our first classifier: 
# ./fasttext supervised -input cooking.train -output model_cooking
Read 0M words
Number of words: 14598
Number of labels: 734
Progress: 100.0% words/sec/thread: 75109 lr: 0.000000 loss: 5.708354 eta: 0h0m

The -input command line option indicates the file containing the training examples, while the -output option indicates where to save the model. At the end of training, a file model_cooking.bin, containing the trained classifier, is created in the current directory. It is possible to test our classifier interactively by running the command: 
# ./fasttext predict model_cooking.bin -

and then typing a sentence. Let's first try the sentence: 
  Which baking dish is best to bake a banana bread ?  
The predicted tag is baking, which fits this question well. Let us now try a second example: 
  Why not put knives in the dishwasher?  
The label predicted by the model is food-safety, which is not relevant. Somehow, the model seems to fail on simple examples. To get a better sense of its quality, let's test it on the validation data by running: 
# ./fasttext test model_cooking.bin cooking.valid
N 3000
P@1 0.124
R@1 0.0541
Number of examples: 3000

The output of fastText is the precision at one (P@1) and the recall at one (R@1). We can also compute the precision at five and recall at five with: 
# ./fasttext test model_cooking.bin cooking.valid 5
N 3000
P@5 0.0668
R@5 0.146
Number of examples: 3000

Advanced readers: precision and recall 
The precision is the proportion of correct labels among the labels predicted by fastText. The recall is the proportion of real labels that were successfully predicted. Let's take an example to make this clearer: 
  Why not put knives in the dishwasher?  
On Stack Exchange, this sentence is labeled with three tags: equipment, cleaning and knives. The top five labels predicted by the model can be obtained with: 
# ./fasttext predict model_cooking.bin - 5

They are food-safety, baking, equipment, substitutions and bread. 

Thus, one out of the five labels predicted by the model is correct, giving a precision of 0.20. Out of the three real labels, only one is predicted by the model, giving a recall of 0.33. For more details, see the related Wikipedia page.
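
To make the arithmetic concrete, here is a minimal Python sketch (a hypothetical helper, not fastText code) that computes P@k and R@k for this single example:

def precision_recall_at_k(predicted, true_labels, k):
    """Precision@k and recall@k for a single example."""
    correct = len(set(predicted[:k]) & set(true_labels))
    return correct / k, correct / len(true_labels)

predicted = ["food-safety", "baking", "equipment", "substitutions", "bread"]
true_labels = ["equipment", "cleaning", "knives"]
print(precision_recall_at_k(predicted, true_labels, 5))  # (0.2, 0.333...)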

Making the model better 
The model obtained by running fastText with the default arguments is pretty bad at classifying new questions. Let's try to improve the performance, by changing the default parameters. 

preprocessing the data 
Looking at the data, we observe that some words contain uppercase letters or punctuation. One of the first steps to improve the performance of our model is to apply some simple pre-processing. A crude normalization can be obtained using command line tools such as sed and tr: 
# cat cooking.stackexchange.txt | sed -e "s/\([.\!?,'/()]\)/ \1 /g" | tr "[:upper:]" "[:lower:]" > cooking.preprocessed.txt
# head -n 12404 cooking.preprocessed.txt > cooking.train
# tail -n 3000 cooking.preprocessed.txt > cooking.valid

Let's train a new model on the pre-processed data: 
# ./fasttext supervised -input cooking.train -output model_cooking
Read 0M words
Number of words: 8952
Number of labels: 735
Progress: 100.0% words/sec/thread: 65087 lr: 0.000000 loss: 10.264122 ETA: 0h 0m

# ./fasttext test model_cooking.bin cooking.valid
N 3000
P@1 0.169
R@1 0.0731
Number of examples: 3000

We observe that thanks to the pre-processing, the vocabulary is smaller (from 14k words to 9k). The precision also goes up by more than 4 points (from 12.4% to 16.9%)! 

more epochs and larger learning rate 
By default, fastText sees each training example only five times during training, which is pretty small, given that our training set only has 12k training examples. The number of times each example is seen (also known as the number of epochs) can be increased using the -epoch option: 
# ./fasttext supervised -input cooking.train -output model_cooking -epoch 25
Read 0M words
Number of words: 8952
Number of labels: 735
Progress: 100.0% words/sec/thread: 65157 lr: 0.000000 loss: 7.375107 ETA: 0h 0m

# ./fasttext test model_cooking.bin cooking.valid
N 3000
P@1 0.516
R@1 0.223
Number of examples: 3000

This is much better! Another way to change the learning speed of our model is to increase (or decrease) the learning rate of the algorithm. This corresponds to how much the model changes after processing each example. A learning rate of 0 would mean that the model does not change at all, and thus does not learn anything. Good values of the learning rate are in the range 0.1 - 1.0: 
# ./fasttext supervised -input cooking.train -output model_cooking -lr 1.0
Read 0M words
Number of words: 8952
Number of labels: 735
Progress: 100.0% words/sec/thread: 64356 lr: 0.000000 loss: 6.790833 ETA: 0h 0m

# ./fasttext test model_cooking.bin cooking.valid
N 3000
P@1 0.578
R@1 0.25
Number of examples: 3000

Even better! Let's try both together: 
# ./fasttext supervised -input cooking.train -output model_cooking -lr 1.0 -epoch 25
Read 0M words
Number of words: 8952
Number of labels: 735
Progress: 100.0% words/sec/thread: 54597 lr: 0.000000 loss: 4.376368 ETA: 0h 0m

# ./fasttext test model_cooking.bin cooking.valid
N 3000
P@1 0.591
R@1 0.255
Number of examples: 3000

Let us now add a few more features to improve our performance even further! 

word n-grams 
Finally, we can improve the performance of a model by using word bigrams, instead of just unigrams. This is especially important for classification problems where word order is important, such as sentiment analysis: 
# ./fasttext supervised -input cooking.train -output model_cooking -lr 1.0 -epoch 25 -wordNgrams 2
Read 0M words
Number of words: 8952
Number of labels: 735
Progress: 100.0% words/sec/thread: 58951 lr: 0.000000 loss: 3.174943 ETA: 0h 0m

# ./fasttext test model_cooking.bin cooking.valid
N 3000
P@1 0.601
R@1 0.26
Number of examples: 3000

With a few steps, we were able to go from a precision at one of 12.4% to 60.1%. Important steps included: 
* preprocessing the data ;
* changing the number of epochs (using the option -epoch, standard range [5 - 50]) ;
* changing the learning rate (using the option -lr, standard range [0.1 - 1.0]) ;
* using word n-grams (using the option -wordNgrams, standard range [1 - 5]).


Advanced readers: What is a Bigram? 
A 'unigram' refers to a single undivided unit, or token, usually used as an input to a model. For example, a unigram can be a word or a letter depending on the model. In fastText, we work at the word level and thus unigrams are words. Similarly, we denote by 'bigram' the concatenation of 2 consecutive tokens or words, and we often talk about an n-gram to refer to the concatenation of any n consecutive tokens. For example, in the sentence 'Last donut of the night', the unigrams are 'last', 'donut', 'of', 'the' and 'night'. The bigrams are: 'Last donut', 'donut of', 'of the' and 'the night'. 

Bigrams are particularly interesting because, for most sentences, you can reconstruct the order of the words just by looking at a bag of n-grams. Let us illustrate this with a simple exercise: given the following bigrams, try to reconstruct the original sentence: 'all out', 'I am', 'of bubblegum', 'out of' and 'am all'. It is common to refer to a word as a unigram. 
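
As a small illustration, here is a minimal Python sketch (a hypothetical ngrams helper, not fastText's internal code) that extracts the unigrams and bigrams of the example sentence:

def ngrams(tokens, n):
    """All sequences of n consecutive tokens."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "Last donut of the night".split()
print(ngrams(tokens, 1))  # unigrams: ['Last', 'donut', 'of', 'the', 'night']
print(ngrams(tokens, 2))  # bigrams: ['Last donut', 'donut of', 'of the', 'the night']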

Scaling things up 
Since we are training our model on a few thousand examples, the training only takes a few seconds. But training models on larger datasets, with more labels, can start to be too slow. A potential solution to make the training faster is to use the hierarchical softmax instead of the regular softmax. The hierarchical softmax arranges the labels in a binary tree and replaces the single softmax over all labels with a sequence of binary decisions along the path to a label, so the cost of evaluating one label drops from linear in the number of labels to logarithmic. This can be done with the option -loss hs: 
# time ./fasttext supervised -input cooking.train -output model_cooking -lr 1.0 -epoch 25 -wordNgrams 2 -loss hs
Read 0M words
Number of words: 8952
Number of labels: 735
Progress: 100.0% words/sec/thread: 1255918 lr: 0.000000 loss: 2.763379 ETA: 0h 0m

real 0m42.428s
user 0m7.212s
sys 0m1.316s
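
For intuition, here is a minimal Python sketch (a toy model, not fastText's implementation) showing how, in a hierarchical softmax, the probability of a label is the product of sigmoid branch decisions along its root-to-leaf path, and how the label probabilities still sum to 1:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy hierarchical softmax over 4 labels arranged in a balanced binary tree.
# In the real model each internal node score depends on the input; here we
# just use random scalars to demonstrate the structure.
rng = np.random.default_rng(0)
node_scores = rng.normal(size=3)   # 3 internal nodes for 4 leaves

# Root-to-leaf paths: (internal node index, go-left?) pairs.
paths = {
    "label_A": [(0, True),  (1, True)],
    "label_B": [(0, True),  (1, False)],
    "label_C": [(0, False), (2, True)],
    "label_D": [(0, False), (2, False)],
}

probs = {}
for label, path in paths.items():
    p = 1.0
    for node, go_left in path:
        branch = sigmoid(node_scores[node])
        p *= branch if go_left else (1.0 - branch)
    probs[label] = p

print(probs)                 # one probability per label
print(sum(probs.values()))   # sums to 1.0, with only O(log K) work per label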

Conclusion 
In this tutorial, we gave a brief overview of how to use fastText to train powerful text classifiers. We had a light overview of some of the most important options to tune. 

Word representations 
A popular idea in modern machine learning is to represent words by vectors. These vectors capture hidden information about a language, like word analogies or semantics. They can also be used to improve the performance of text classifiers. In this tutorial, we show how to build these word vectors with the fastText tool. 

Getting the data 
In order to compute word vectors, you need a large text corpus. Depending on the corpus, the word vectors will capture different information. In this tutorial, we focus on Wikipedia's articles but other sources could be considered, like news or Webcrawl (more examples here). To download a raw dump of Wikipedia, run the following command: 
# wget https://dumps.wikimedia.org/enwiki/latest/enwiki-latest-pages-articles.xml.bz2

Downloading the full Wikipedia corpus takes some time. Instead, let's restrict our study to the first 1 billion bytes of English Wikipedia. They can be found on Matt Mahoney's website: 
# mkdir data
# wget -c http://mattmahoney.net/dc/enwik9.zip -P data
# unzip data/enwik9.zip -d data

A raw Wikipedia dump contains a lot of HTML / XML data. We pre-process it with the wikifil.pl script bundled with fastText (this script was originally developed by Matt Mahoney, and can be found on his website): 
# perl wikifil.pl data/enwik9 > data/fil9

We can check the file by running the following command: 
# head -c 80 data/fil9
anarchism originated as a term of abuse first used against early working class 

The text is nicely pre-processed and can be used to learn our word vectors. 

Training word vectors 
Learning word vectors on this data can now be achieved with a single command: 
# mkdir result
# ./fasttext skipgram -input data/fil9 -output result/fil9

To decompose this command line: ./fasttext calls the fastText binary with the 'skipgram' model (it can also be 'cbow'). We then specify the required options: '-input' for the location of the data and '-output' for the location where the word representations will be saved. While fastText is running, the progress and estimated time to completion are shown on your screen. Once the program finishes, there should be two files in the result directory: 
# ls -l result
-rw-r--r-- 1 bojanowski 1876110778 978480850 Dec 20 11:01 fil9.bin
-rw-r--r-- 1 bojanowski 1876110778 190004182 Dec 20 11:01 fil9.vec

The fil9.bin file is a binary file that stores the whole fastText model and can be subsequently loaded. The fil9.vec file is a text file that contains the word vectors, one per line for each word in the vocabulary: 
# head -n 4 result/fil9.vec
218316 100
the -0.10363 -0.063669 0.032436 -0.040798 0.53749 0.00097867 0.10083 0.24829 ...
of -0.0083724 0.0059414 -0.046618 -0.072735 0.83007 0.038895 -0.13634 0.60063 ...
one 0.32731 0.044409 -0.46484 0.14716 0.7431 0.24684 -0.11301 0.51721 0.73262 ...

The first line is a header containing the number of words and the dimensionality of the vectors. The subsequent lines are the word vectors for all words in the vocabulary, sorted by decreasing frequency. 
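
Since the format is plain text, the file can be loaded with a few lines of Python. A minimal sketch (a hypothetical load_vec helper, assuming the result/fil9.vec file produced above):

import numpy as np

def load_vec(path):
    """Load a fastText .vec text file into a {word: vector} dictionary."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        n_words, dim = map(int, f.readline().split())   # header: vocabulary size, dimension
        for line in f:
            parts = line.rstrip().split(" ")
            if len(parts) == dim + 1:                    # skip malformed lines, if any
                vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

vectors = load_vec("result/fil9.vec")
print(len(vectors))   # should match the vocabulary size given in the header line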

Advanced readers: skipgram versus cbow 
fastText provides two models for computing word representations: skipgram and cbow ('Continuous-Bag-Of-Words'). 

The skipgram model learns to predict a target word thanks to a nearby word. On the other hand, the cbow model predicts the target word according to its context. The context is represented as a bag of the words contained in a fixed size window around the target word. Let us illustrate this difference with an example: given the sentence 'Poets have been mysteriously silent on the subject of cheese' and the target word 'silent', a skipgram model tries to predict the target using a random close-by word, like 'subject' or 'mysteriously'. The cbow model takes all the words in a surrounding window, like {been, mysteriously, on, the}, and uses the sum of their vectors to predict the target. The figure below summarizes this difference with another example. 
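
As a rough illustration in code (a toy sketch of the training examples, not how fastText actually samples them), here is how the two models would look at the target word 'silent' with a window of size 2:

sentence = "poets have been mysteriously silent on the subject of cheese".split()
window = 2
i = sentence.index("silent")
context = sentence[max(0, i - window):i] + sentence[i + 1:i + 1 + window]

print([(c, "silent") for c in context])  # skipgram: predict 'silent' from each nearby word
print((context, "silent"))               # cbow: predict 'silent' from the bag of context words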


To train a cbow model with fastText, you run the following command: 
# ./fasttext cbow -input data/fil9 -output result/fil9

In practice, we observe that skipgram models work better with subword information than cbow. 

Advanced readers: playing with the parameters 
So far, we have run fastText with the default parameters, but depending on the data, these parameters may not be optimal. Let us give an introduction to some of the key parameters for word vectors. 

The most important parameters of the model are its dimension and the range of sizes for the subwords. The dimension (dim) controls the size of the vectors: the larger they are, the more information they can capture, but they require more data to learn. If they are too large, they are also harder and slower to train. By default, we use 100 dimensions, but any value in the 100-300 range is common. The subwords are all the substrings of a word whose length is between the minimum size (minn) and the maximum size (maxn). By default, we take all the subwords between 3 and 6 characters, but other ranges could be more appropriate for different languages: 
# ./fasttext skipgram -input data/fil9 -output result/fil9 -minn 2 -maxn 5 -dim 300
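
To see what these subwords look like, here is a minimal Python sketch (a hypothetical char_ngrams helper; fastText wraps each word in '<' and '>' boundary markers before extracting substrings):

def char_ngrams(word, minn=3, maxn=6):
    """Character n-grams of a word, with '<' and '>' boundary markers."""
    w = "<" + word + ">"
    return [w[i:i + n] for n in range(minn, maxn + 1)
                       for i in range(len(w) - n + 1)]

print(char_ngrams("where", 3, 6))
# ['<wh', 'whe', 'her', 'ere', 're>', '<whe', 'wher', ...]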

Depending on the quantity of data you have, you may want to change the parameters of the training. The epoch parameter controls how many times the model will loop over your data. By default, we loop over the dataset 5 times. If your dataset is extremely massive, you may want to loop over it less often. Another important parameter is the learning rate -lr. The higher the learning rate, the faster the model converges to a solution, but at the risk of overfitting to the dataset. The default value is 0.05, which is a good compromise. If you want to play with it, we suggest staying in the range of [0.01, 1]. 

Finally, fastText is multi-threaded and uses 12 threads by default. If you have fewer CPU cores (say 4), you can easily set the number of threads using the -thread flag: 
# ./fasttext skipgram -input data/fil9 -output result/fil9 -thread 4


Printing word vectors 
Searching and printing word vectors directly from the fil9.vec file is cumbersome. Fortunately, there is a print-word-vectors functionality in fastText. For example, we can print the word vectors of the words asparagus, pidgey and yellow with the following command: 
# echo "asparagus pidgey yellow" | ./fasttext print-word-vectors result/fil9.bin
asparagus 0.46826 -0.20187 -0.29122 -0.17918 0.31289 -0.31679 0.17828 -0.04418 ...
pidgey -0.16065 -0.45867 0.10565 0.036952 -0.11482 0.030053 0.12115 0.39725 ...
yellow -0.39965 -0.41068 0.067086 -0.034611 0.15246 -0.12208 -0.040719 -0.30155 ...

A nice feature is that you can also query for words that did not appear in your data! Indeed, words are represented by the sum of their substrings. As long as the unknown word is made of known substrings, there is a representation for it! As an example, let's try with a misspelled word: 
# echo "enviroment" | ./fasttext print-word-vectors result/fil9.bin

You still get a word vector for it! But how good is it? Let's find out in the next sections! 

Nearest neighbor queries 
A simple way to check the quality of a word vector is to look at its nearest neighbors. This gives an intuition of the type of semantic information the vectors are able to capture. This can be achieved with the nn functionality. For example, we can query the 10 nearest neighbors of a word by running the following command: 
# ./fasttext nn result/fil9.bin
Pre-computing word vectors... done.

Then we are prompted to type our query word; let us try asparagus: 
Query word? asparagus
beetroot 0.812384
tomato 0.806688
horseradish 0.805928
spinach 0.801483
licorice 0.791697
lingonberries 0.781507
asparagales 0.780756
lingonberry 0.778534
celery 0.774529
beets 0.773984

Nice! It seems that vegetable vectors are similar. Note that the nearest neighbor is the word asparagus itself; this means that this word appeared in the dataset. What about Pokémon? 
Query word? pidgey
pidgeot 0.891801
pidgeotto 0.885109
pidge 0.884739
pidgeon 0.787351
pok 0.781068
pikachu 0.758688
charizard 0.749403
squirtle 0.742582
beedrill 0.741579
charmeleon 0.733625

Different evolutions of the same Pokémon have close-by vectors! But what about our misspelled word: is its vector close to anything reasonable? Let's find out: 
Query word? enviroment
enviromental 0.907951
environ 0.87146
enviro 0.855381
environs 0.803349
environnement 0.772682
enviromission 0.761168
realclimate 0.716746
environment 0.702706
acclimatation 0.697196
ecotourism 0.697081

Thanks to the information contained within the word, the vector of our misspelled word matches reasonable words! It is not perfect, but the main information has been captured. 

Advanced reader: measure of similarity 
In order to find nearest neighbors, we need to compute a similarity score between words. Our words are represented by continuous word vectors, and we can thus apply simple similarity measures to them. In particular, we use the cosine of the angle between two vectors. This similarity is computed for all words in the vocabulary, and the 10 most similar words are shown. Of course, if the word appears in the vocabulary, it will appear on top, with a similarity of 1. 
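
As a minimal sketch of this computation (reusing the hypothetical load_vec helper from the earlier sketch; this is not fastText's own nn code):

import numpy as np

def cosine(u, v):
    """Cosine of the angle between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def nearest_neighbors(query, vectors, k=10):
    """Rank every word in the vocabulary by cosine similarity to the query vector."""
    scores = [(word, cosine(query, vec)) for word, vec in vectors.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)[:k]

# vectors = load_vec("result/fil9.vec")
# nearest_neighbors(vectors["asparagus"], vectors)  # 'asparagus' itself comes first, at 1.0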

Word analogies 
In a similar spirit, one can play around with word analogies. For example, we can see if our model can guess what is to France what Berlin is to Germany. This can be done with the analogies functionality. It takes a word triplet (like Germany Berlin France) and outputs the analogy: 
# ./fasttext analogies result/fil9.bin
Pre-computing word vectors... done.
Query triplet (A - B + C)? berlin germany france
paris 0.896462
bourges 0.768954
louveciennes 0.765569
toulouse 0.761916
valenciennes 0.760251
montpellier 0.752747
strasbourg 0.744487
meudon 0.74143
bordeaux 0.740635
pigneaux 0.736122

The answer provided by our model is Paris, which is correct. Let's have a look at a less obvious example: 
Query triplet (A - B + C)? psx sony nintendo
gamecube 0.803352
nintendogs 0.792646
playstation 0.77344
sega 0.772165
gameboy 0.767959
arcade 0.754774
playstationjapan 0.753473
gba 0.752909
dreamcast 0.74907
famicom 0.745298

Our model considers that the nintendo analogue of a psx is the gamecube, which seems reasonable. Of course, the quality of the analogies depends on the dataset used to train the model; one can only hope to cover the fields that appear in the dataset. 
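
The underlying computation is again a simple vector operation followed by a nearest-neighbor search. A minimal sketch (reusing the hypothetical vectors and nearest_neighbors helpers above; fastText's own implementation may differ in details):

def analogy(a, b, c, vectors, k=10):
    """Answer the query 'A - B + C' and drop the query words from the results."""
    query = vectors[a] - vectors[b] + vectors[c]
    hits = nearest_neighbors(query, vectors, k + 3)
    return [(word, score) for word, score in hits if word not in (a, b, c)][:k]

# analogy("berlin", "germany", "france", vectors)  # 'paris' should rank near the top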

Importance of character n-grams 
Using subword-level information is particularly interesting to build vectors for unknown words. For example, the word gearshift does not exist on Wikipedia but we can still query its closest existing words: 
Query word? gearshift
gearing 0.790762
flywheels 0.779804
flywheel 0.777859
gears 0.776133
driveshafts 0.756345
driveshaft 0.755679
daisywheel 0.749998
wheelsets 0.748578
epicycles 0.744268
gearboxes 0.73986

Most of the retrieved words share substantial substrings, but a few are actually quite different, like cogwheel. You can try other words like sunbathe or grandnieces. Now that we have seen the value of subword information for unknown words, let's check how it compares to a model that does not use subword information. To train a model without subwords, just run the following command: 
# ./fasttext skipgram -input data/fil9 -output result/fil9-none -maxn 0

The results are saved in result/fil9-none.vec and result/fil9-none.bin.

To illustrate the difference, let us take an uncommon word in Wikipedia, like accomodation, which is a misspelling of accommodation. Here are the nearest neighbors obtained without subwords: 
# ./fasttext nn result/fil9-none.bin
Query word? accomodation
sunnhordland 0.775057
accomodations 0.769206
administrational 0.753011
laponian 0.752274
ammenities 0.750805
dachas 0.75026
vuosaari 0.74172
hostelling 0.739995
greenbelts 0.733975
asserbo 0.732465

The results do not make much sense: most of these words are unrelated. On the other hand, using subword information gives the following list of nearest neighbors: 
Query word? accomodation
accomodations 0.96342
accommodation 0.942124
accommodations 0.915427
accommodative 0.847751
accommodating 0.794353
accomodated 0.740381
amenities 0.729746
catering 0.725975
accomodate 0.703177
hospitality 0.701426

The nearest neighbors capture different variations of the word accommodation. We also get semantically related words such as amenities or hospitality.

Supplement 
Text Classification & Word Representations using fastText (An NLP library by Facebook) 
fasttext - A Python interface for Facebook fastText library
