Here’s an easy way to find very simple classification rules from a set of instances. Called 1R for 1-rule, it generates a one-level decision tree expressed in the form of a set of rules that all test one particular attribute. 1R is a simple, cheap method that often comes up with quite good rules for characterizing the structure in data. It turns out that simple rules frequently achieve surprisingly high accuracy. Perhaps this is because the structure underlying many real-world datasets is quite rudimentary, and just one attribute is sufficient to determine the class of an instance quite accurately. In any event, it is always a good plan to try the simplest things first.
The idea is this: we make rules that test a single attribute and branch accordingly. Each branch corresponds to a different value of the attribute. The best classification to give each branch is obvious: use the class that occurs most often in the training data. The error rate of the rules is then easy to determine: just count the errors that occur on the training data, that is, the number of instances that do not have the majority class.
Each attribute generates a different set of rules, one rule for every value of the attribute. Evaluate the error rate for each attribute’s rule set and choose the best. It’s that simple! Figure 4.1 shows the algorithm in the form of pseudocode.
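Figure 4.1 itself is not reproduced in this post, but the procedure is short enough to sketch directly. Below is a minimal, self-contained Groovy sketch of the idea (the oneR function and the tiny two-attribute dataset are made up for illustration, not the book's pseudocode):

```groovy
// Minimal 1R sketch: for each attribute, build one rule per value
// (predict the majority class among instances with that value),
// then keep the attribute whose rule set makes the fewest errors.
def oneR(List<Map> data, List<String> attrs, String target) {
    attrs.collect { attr ->
        // Majority class for every distinct value of this attribute.
        def rules = data.groupBy { it[attr] }.collectEntries { v, rows ->
            [(v): rows.countBy { it[target] }.max { it.value }.key]
        }
        // Training errors: instances the rules misclassify.
        def errors = data.count { rules[it[attr]] != it[target] }
        [attr: attr, rules: rules, errors: errors]
    }.min { it.errors }
}

// Tiny made-up subset of the weather data, just to show the call.
def data = [
    [outlook: 'sunny',    windy: 'false', play: 'no' ],
    [outlook: 'sunny',    windy: 'true',  play: 'no' ],
    [outlook: 'overcast', windy: 'false', play: 'yes'],
    [outlook: 'rainy',    windy: 'true',  play: 'no' ],
    [outlook: 'rainy',    windy: 'false', play: 'yes'],
]
println oneR(data, ['outlook', 'windy'], 'play')
```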
To see the 1R method at work, consider the weather data of Table 1.2 (we will encounter it many times again when looking at how learning algorithms work). To predict the class in the final column, play, 1R considers four sets of rules, one for each attribute. These rules are shown in Table 4.1.
An asterisk indicates that a random choice has been made between two equally likely outcomes. The number of errors is given for each rule, along with the total number of errors for the rule set as a whole. 1R chooses the attribute that produces rules with the smallest number of errors—that is, the first and third rule sets. Arbitrarily breaking the tie between these two rule sets gives:
- outlook: sunny -> no
- outlook: overcast -> yes
- outlook: rainy -> yes
Missing values and numeric attributes
Although a very rudimentary learning method, 1R does accommodate both missing values and numeric attributes, and it deals with them in simple but effective ways. Missing is treated as just another attribute value so that, for example, if the weather data had contained missing values for the outlook attribute, a rule set formed on outlook would have four branches, one each for sunny, overcast, and rainy, and a fourth for missing.
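As a minimal illustration, reusing the grouping idea from the hypothetical oneR sketch above: if outlook could be null, mapping the null to the string 'missing' before grouping is enough to give missing its own branch:

```groovy
// Treat a missing (null) outlook as just another attribute value,
// 'missing', so it gets its own branch alongside sunny, overcast, rainy.
def rules = data.groupBy { it.outlook ?: 'missing' }
                .collectEntries { v, rows ->
                    [(v): rows.countBy { it.play }.max { it.value }.key]
                }
```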
We can convert numeric attributes into nominal ones using a simple discretization method. First, sort the training examples according to the values of the numeric attribute. This produces a sequence of class values. For example, sorting the numeric version of the weather data (Table 1.3) according to the values of temperature produces the sequence:

- 64 65 68 69 70 71 72 72 75 75 80 81 83 85
- yes no yes yes yes no no yes yes yes no yes yes no
Discretization involves partitioning this sequence by placing breakpoints in it. One possibility is to place breakpoints wherever the class changes:
- yes | no | yes yes yes | no no | yes yes yes | no | yes yes | no
Choosing breakpoints halfway between the examples on either side places them at 64.5, 66.5, 70.5, 72, 77.5, 80.5, and 84. The two instances with value 72 cause an immediate problem: they have the same temperature but fall on opposite sides of a breakpoint. The simplest fix is to move that breakpoint up one example, to 73.5, producing a mixed partition in which no is the majority class.

A more serious problem is that this procedure tends to form a large number of categories. The 1R method will naturally gravitate toward choosing an attribute that splits into many categories, because this partitions the dataset into many pieces, making it more likely that instances will have the same class as the majority in their partition. The limiting case is an attribute that has a different value for each instance, that is, an identification code that pinpoints instances uniquely, and this will yield a zero error rate on the training set because each partition contains just one instance. Of course, highly branching attributes do not usually perform well on test examples; indeed, the identification code will never correctly predict any example outside the training set. This phenomenon is known as overfitting.
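To make the limiting case concrete, here is the hypothetical oneR sketch from earlier with a unique id attribute added; 1R happily selects it and scores zero training errors, even though such rules say nothing about unseen data:

```groovy
// Give each instance a unique identification code: every branch of the
// 'id' rule set contains one instance, so training error is zero (overfitting).
def withId = data.withIndex().collect { row, i -> row + [id: "i${i}"] }
println oneR(withId, ['outlook', 'windy', 'id'], 'play')   // picks 'id' with 0 errors
```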
For 1R, overfitting is likely to occur whenever an attribute has a large number of possible values. Consequently, when discretizing a numeric attribute, a rule is adopted that dictates a minimum number of examples of the majority class in each partition. Suppose that minimum is set at three. This eliminates all but two of the preceding partitions. Instead, the partitioning process begins:
- yes no yes yes | yes...

ensuring that there are three occurrences of yes, the majority class, in the first partition. However, because the next example is also yes, we lose nothing by including it in the first partition too. This leads to a new division:
- yes no yes yes yes | no no yes yes yes | no yes yes no
Whenever adjacent partitions have the same majority class, as do the first two partitions above, they can be merged together without affecting the meaning of the rule sets. Thus the final discretization is:
- yes no yes yes yes no no yes yes yes | no yes yes no

which leads to the following rule set for temperature, whose breakpoint of 77.5 lies halfway between 75 and 80:
- temperature: <= 77.5 -> yes
- temperature: > 77.5 -> no

The second rule involves an arbitrary choice: in the final partition the classes are evenly split, and as it happens no was chosen.
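The whole discretization walk-through can be compressed into a few lines. This is a minimal Groovy sketch under the assumptions stated above (a minimum of three majority-class examples per partition, closing a partition only where the class changes); the values and classes are the sorted temperature sequence given earlier:

```groovy
// 1R discretization sketch: grow a partition until its majority class
// occurs at least minBucket times, closing it only where the class changes.
def values  = [64, 65, 68, 69, 70, 71, 72, 72, 75, 75, 80, 81, 83, 85]
def classes = ['yes','no','yes','yes','yes','no','no','yes','yes','yes','no','yes','yes','no']
int minBucket = 3

def partitions = []
def current = []
classes.eachWithIndex { c, i ->
    current << c
    def majorityCount = current.countBy { it }.values().max()
    // Close the partition once the majority occurs minBucket times
    // and the next class differs (so equal neighbours stay together).
    if (majorityCount >= minBucket && i + 1 < classes.size() && classes[i + 1] != c) {
        partitions << current
        current = []
    }
}
if (current) partitions << current
println partitions   // [[yes, no, yes, yes, yes], [no, no, yes, yes, yes], [no, yes, yes, no]]
```

Printing partitions reproduces the three-way division above; merging the first two partitions (both majority yes) gives the final discretization.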
Despite its simplicity, 1R did astonishingly well when compared with state-of-the-art learning methods: the rules it produced turned out to be just a few percentage points less accurate, on almost all of the datasets tested, than the decision trees produced by a state-of-the-art decision tree inducer, and those trees were in general considerably larger than 1R's rules. Rules that test a single attribute are often a viable alternative to more complex structures. This strongly encourages a simplicity-first methodology: establish baseline performance with simple, rudimentary techniques before progressing to more sophisticated learning methods, whose output is inevitably harder for people to interpret.
The 1R procedure learns a one-level decision tree whose leaves represent the various different classes. A slightly more expressive technique is to use a different rule for each class, where each rule is a conjunction of tests, one for each attribute. For numeric attributes the test checks whether the value lies within a given interval; for nominal ones it checks whether the value is in a certain subset of that attribute's values. Both types of test, intervals and subsets, are learned from the training data pertaining to each class. For a numeric attribute, the endpoints of the interval are the minimum and maximum values that occur in the training data for that class. For a nominal one, the subset contains just those values that the attribute takes for that class in the training data. Rules representing different classes usually overlap, so at prediction time the rule with the most matching tests wins. This simple technique often gives a useful first impression of a dataset; it is extremely fast and can be applied to very large quantities of data.
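Here is a minimal Groovy sketch of this per-class scheme, following the description above; learn and predict are hypothetical names, and ties between classes are broken arbitrarily:

```groovy
// Sketch of the rule-per-class scheme: for each class, a numeric
// attribute gets the [min, max] interval of its training values and
// a nominal attribute gets the set of values seen for that class.
def learn(List<Map> data, List<String> attrs, String target) {
    data.groupBy { it[target] }.collectEntries { cls, rows ->
        def tests = attrs.collectEntries { a ->
            def vals = rows*.get(a)
            def test = (vals[0] instanceof Number)
                ? [lo: vals.min(), hi: vals.max()]   // interval test
                : vals.toSet()                        // subset test
            [(a): test]
        }
        [(cls): tests]
    }
}

def predict(Map model, Map instance) {
    // Predict the class whose rule has the most matching tests.
    model.max { entry ->
        entry.value.count { a, t ->
            def v = instance[a]
            (t instanceof Map) ? (v >= t.lo && v <= t.hi) : t.contains(v)
        }
    }.key
}
```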
Lab Demo
First of all, we have to create our input training data (the first line holds the headers; the header marked with '*' is the target attribute to predict):
data/weathers.dat:

```
Outlook,Temperature,Humidity,Windy,*Play
sunny,hot,high,false,no
sunny,hot,high,true,no
overcast,hot,high,false,yes
rainy,mild,high,false,yes
rainy,cool,normal,false,yes
rainy,cool,normal,true,no
overcast,cool,normal,true,yes
sunny,mild,high,false,no
sunny,cool,normal,false,yes
rainy,mild,normal,false,yes
sunny,mild,normal,true,yes
overcast,mild,high,true,yes
overcast,hot,normal,false,yes
rainy,mild,high,true,no
```
OneR.groovy:

```groovy
package dm.basic.oner

import dm.input.SimpleIn
import flib.util.CountMap
import flib.util.Tuple

class OneR {
    public static void main(args) {
        // Read the training data
        SimpleIn si = new SimpleIn(new File("data/weathers.dat"))
        def headers = si.getHeaders()

        // Locate the target attribute (the header marked with '*')
        int ei = headers.findIndexOf { h -> h.startsWith('*') }
        printf("\t[Info] Target Header: %d (%s)\n", ei, headers[ei])

        printf("\t[Info] Start 1R Algorithm...\n")
        printf("\t\t0) Initialize...\n")
        // For every attribute except the target, keep a tuple of
        // (header name, value list, value -> class CountMap, distinct values)
        def trainMap = [:]
        headers.size().times { i ->
            if (i == ei) return
            def cmm = [:].withDefault { k -> new CountMap() }
            trainMap[i] = new Tuple(headers[i], [], cmm, new TreeSet())
        }

        printf("\t\t1) Start analyzing data...\n")
        Iterator datIter = si.iterator()
        while (datIter.hasNext()) {
            def r = datIter.next()
            def p = r[ei]                  // class value of this instance
            for (int i = 0; i < r.size(); i++) {
                if (i == ei) continue
                Tuple t = trainMap[i]
                t.get(1).add(r[i])         // record the attribute value
                t.get(2)[r[i]].count(p)    // count the class under this value
                t.get(3).add(r[i])         // remember the distinct value
            }
        }

        printf("\t\t2) Pick attribute with least error...\n")
        int gErr = Integer.MAX_VALUE
        String attr = null
        for (Tuple t : trainMap.values()) {
            int err = 0
            printf("\t\t\tHeader='%s':\n", t.get(0))
            for (String v : t.get(3)) {
                CountMap cm = t.get(2)[v]
                def maj = cm.major()       // majority class list for this value
                def mc  = maj[0]           // pick the first majority class
                def mcc = cm.getCount(mc)  // count of the majority class
                def tc  = cm.size()        // total instances with this value
                printf("\t\t\t\tValue='%s'->%s (%d/%d)\n", v, mc, mcc, tc)
                err += (tc - mcc)          // instances outside the majority class
            }
            if (err < gErr) {              // keep the attribute with fewest errors
                gErr = err
                attr = t.get(0)
            }
        }
        printf("\t\t3) Generate 1R: %s\n", attr)
    }
}
```
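Assuming the author's SimpleIn, CountMap, and Tuple helpers are on the classpath, running the script should select Outlook: its rule set makes 4 errors on the 14 training instances, tying with Humidity, and the strict err < gErr comparison keeps the first of the tied attributes in header order.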