## Wednesday, December 14, 2016

### [ NNF For Java ] More Supervised Training (Ch6)

Preface
• Introducing the Lunar Lander Example
• Supervised Training without Training Sets
• Using Genetic Algorithms
• Using Simulated Annealing
• Genetic Algorithms and Simulated Annealing with Training Sets

So far, this book has only explored training a neural network by using the supervised propagation training methods. This chapter will look at some nonpropagation training techniques. The neural network in this chapter will be trained without a training set. It is still supervised in that feedback from the neural network’s output is constantly used to help train the neural network. We simply will not supply training data ahead of time.

Two common techniques for this sort of training are simulated annealing and genetic algorithms. Encog provides built-in support for both. The example in this chapter can be trained with either algorithm, both of which will be discussed later in this chapter. The example in this chapter presents the classic “Lunar Lander” game. This game has been implemented many times and is almost as old as computers themselves.

The idea behind most variants of the Lunar Lander game is very similar and the example program works as follows: The lunar lander spacecraft will begin to fall. As it falls, it accelerates. There is a maximum velocity that the lander can reach, which is called the ‘terminal velocity.’ Thrusters can be applied to the lander to slow its descent. However, there is a limited amount of fuel. Once the fuel is exhausted, the lander will simply fall, and nothing can be done.

This chapter will teach a neural network to pilot the lander. This is a very simple text-only simulation. The neural network will have only one option available to it. It can either decide to fire the thrusters or not to fire the thrusters. No training data will be created ahead of time and no assumptions will be made about how the neural network should pilot the craft. If using training sets, input would be provided ahead of time regarding what the neural network should do in certain situations. For this example, the neural network will learn everything on its own.

Even though the neural network will learn everything on its own, this is still supervised training. The neural network will not be totally left to its own devices; we will provide a way to score it. To score the neural network, we must give it some goals and then calculate a numeric value that determines how well the neural network achieved those goals.

These goals are arbitrary and simply reflect what was picked to score the network. The goals are summarized here:
• Land as softly as possible
• Cover as much distance as possible
• Conserve fuel

The first goal is not to crash, but to hit the lunar surface as softly as possible. Therefore, any downward velocity at the time of impact is a very big negative score. The second goal for the neural network is to try to cover as much distance as possible while falling. To do this, it needs to stay aloft as long as possible, and additional points are awarded for staying aloft longer. Finally, bonus points are given for still having fuel once the craft lands. The score calculation can be seen in Equation 6.1 (this is the formula implemented by the simulator's score method later in the chapter):

score = (fuel × 10) + seconds + (velocity × 1000)    (Equation 6.1)

In the next section we will run the Lunar Lander example and observe as it learns to land a spacecraft.

Running the Lunar Lander Example
To run the Lunar Lander game you should execute the LunarLander class. This class requires no arguments. Once the program begins, the neural network immediately begins training. It will cycle through 50 epochs, or training iterations, before it is done. When it first begins, the score may even be negative: the early attempts by the untrained neural network hit the moon at high velocity and do not cover much distance.
Epoch #1 Score:5281.0
Epoch #2 Score:5281.0
Epoch #3 Score:5758.0
...
Epoch #7 Score:6929.0
// After the seventh epoch, the score begins to increase.
Epoch #8 Score:6929.0
Epoch #9 Score:6929.0
Epoch #10 Score:6929.0
Epoch #11 Score:6929.0
Epoch #12 Score:10009.0
...
Epoch #49 Score:10009.0
Epoch #50 Score:10009.0
...

The training techniques used in this chapter make extensive use of random numbers. As a result, running this example multiple times may result in entirely different scores. More epochs may have produced a better-trained neural network; however, the program limits it to 50. This number usually produces a fairly skilled neural pilot. Once the network is trained, run the simulation with the winning pilot. The telemetry is displayed at each second.

The neural pilot kept the craft aloft for over 1,000 seconds, so we will not show every telemetry report. However, some of the interesting actions that this neural pilot learned are highlighted. The neural network learned it was best to just let the craft free-fall for a while.
How the winning network landed:
Elapsed: 1 s, Fuel: 200 l, Velocity: -1.6200 m/s, 9998 m
Elapsed: 2 s, Fuel: 200 l, Velocity: -3.2400 m/s, 9995 m
Elapsed: 3 s, Fuel: 200 l, Velocity: -4.8600 m/s, 9990 m
...
Elapsed: 171 s, Fuel: 200 l, Velocity: -40.0000 m/s, 3396 m
THRUST
Elapsed: 172 s, Fuel: 199 l, Velocity: -31.6200 m/s, 3355 m

You can see that 171 seconds in and 3,396 meters above the ground, the terminal velocity of -40 m/s has been reached. There is no real science behind -40 m/s being the terminal velocity; it was just chosen as an arbitrary number. Having a terminal velocity is interesting because the neural networks learn that once this is reached, the craft will not speed up. They use the terminal velocity to save fuel and “break their fall” when they get close to the surface. The freefall at terminal velocity continues for some time. Finally, the thrusters are fired for the first time.
...
Elapsed: 1042 s, Fuel: 36 l, Velocity: -40.0000 m/s, 164 m
THRUST
Elapsed: 1043 s, Fuel: 35 l, Velocity: -31.6200 m/s, 123 m
Elapsed: 1044 s, Fuel: 35 l, Velocity: -33.2400 m/s, 89 m
THRUST
Elapsed: 1045 s, Fuel: 34 l, Velocity: -24.8600 m/s, 55 m
THRUST
Elapsed: 1046 s, Fuel: 33 l, Velocity: -16.4800 m/s, 28 m
THRUST
Elapsed: 1047 s, Fuel: 32 l, Velocity: -8.1000 m/s, 10 m
THRUST
Elapsed: 1048 s, Fuel: 31 l, Velocity: 0.2800 m/s, 0 m
THRUST
Elapsed: 1049 s, Fuel: 30 l, Velocity: 8.6600 m/s, 0 m
10009

Finally, the craft lands, with a small positive velocity of 8.66 m/s. You may wonder why the lander ends its descent while moving upward. This is due to a slight glitch in the program. This "glitch" is left in because it illustrates an important point: when neural networks are allowed to learn, they are totally on their own and will take advantage of everything they can find.

The final positive velocity is because the program decides if it wants to thrust as the last part of a simulation cycle. The program has already decided the craft’s altitude is below zero, and it has landed. But the neural network
“sneaks in” that one final thrust, even though the craft is already landed and this thrust does no good. However, the final thrust does increase the score of the neural network.

Recall equation 6.1. For every negative meter per second of velocity at landing, the program score is decreased by 1,000. The program figured out that the opposite is also true. For every positive meter per second of velocity, it also gains 1,000 points. By learning about this little quirk in the program, the neural pilot can obtain even higher scores. The neural pilot learned some very interesting things without being fed a pre-devised strategy. The network learned what it wanted to do. Specifically, this pilot decided the following:
• Free-fall for some time to take advantage of terminal velocity.
• At a certain point, break the freefall and slow the craft.
• Slowly lose speed as you approach the surface.
• Give one final thrust, after landing, to maximize score.
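The final score of 10009 can be checked by hand against Equation 6.1 using the last telemetry line above (30 l of fuel, 1,049 seconds aloft, +8.66 m/s). A small sketch reproducing the arithmetic of the simulator's score method:

```java
// Reproduces the winning score from the final telemetry line:
// Fuel: 30 l, Elapsed: 1049 s, Velocity: +8.6600 m/s
public class LanderScoreCheck {
    // Same formula as LanderSimulator.score():
    // (fuel * 10) + seconds + (velocity * 1000), truncated to int
    public static int score(int fuel, int seconds, double velocity) {
        return (int) ((fuel * 10) + seconds + (velocity * 1000));
    }

    public static void main(String[] args) {
        System.out.println(score(30, 1049, 8.66)); // prints 10009
    }
}
```

Note how the positive landing velocity contributes +8,660 points, which is exactly the quirk the neural pilot learned to exploit.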

The neural pilot in this example was trained using a genetic algorithm. Genetic algorithms and simulated annealing will be discussed later in this chapter. First, we will see how the Lander was simulated and how its score is actually calculated.

Examining the Lunar Lander Simulator
We will now examine how the Lunar Lander example was created, beginning with the physical simulation, then how the neural network actually pilots the spacecraft, and finally how the neural network learns to be a better pilot.

Simulating the Lander
First, we need a class that will simulate the “physics” of lunar landing. The term “physics” is used very loosely. The purpose of this example is more on how a neural network adapts to an artificial environment than any sort of realistic physical simulation. All of the physical simulation code is contained in the LanderSimulator class. This class begins by defining some constants that will be important to the simulation.
    public static final double GRAVITY = 1.62;
    public static final double THRUST = 10;
    public static final double TERMINAL_VELOCITY = 40;
The GRAVITY constant defines the acceleration due to gravity on the moon. It is set to 1.62, measured in meters per second squared. The THRUST constant specifies the number of meters per second by which thrusting changes the velocity each turn, countering gravity. The TERMINAL_VELOCITY constant is the fastest speed the spacecraft can travel, either upward or downward. In addition to these constants, the simulator program will need several instance variables to maintain state. These variables are listed below:
    private int fuel;
    private int seconds;
    private double altitude;
    private double velocity;
The fuel variable holds the amount of fuel remaining. The seconds variable holds the number of seconds aloft. The altitude variable holds the current altitude in meters. The velocity variable holds the current velocity. Positive numbers indicate that the craft is moving upwards. Negative numbers indicate that the craft is moving downwards.

The simulator sets the values to reasonable starting values in the following constructor:
    class LanderSimulator {
        ...
        public LanderSimulator() {
            this.fuel = 200;
            this.seconds = 0;
            this.altitude = 10000;
            this.velocity = 0;
        }
        ...
    }
The craft starts with 200 liters of fuel and the altitude is set to 10,000 meters above ground. The turn method processes each "turn." A turn is one second in the simulator. The thrust parameter indicates whether the spacecraft wishes to thrust during this turn.
    public void turn(boolean thrust) {
        // (1)
        this.seconds++;
        this.velocity -= GRAVITY;
        // (2)
        this.altitude += this.velocity;

        // (3)
        if (thrust && this.fuel > 0) {
            this.fuel--;
            this.velocity += THRUST;
        }

        // (4)
        this.velocity = Math.max(-TERMINAL_VELOCITY, this.velocity);
        this.velocity = Math.min(TERMINAL_VELOCITY, this.velocity);

        // (5)
        if (this.altitude < 0)
            this.altitude = 0;
    }
(1). First, increase the number of seconds elapsed by one, and decrease the velocity by the GRAVITY constant to simulate the fall; (2). The current velocity is added to the altitude. Of course, if the velocity is negative, the altitude will decrease; (3). If thrust is applied during this turn, then decrease the fuel by one and increase the velocity by the THRUST constant; (4). Terminal velocity must be imposed: the craft cannot fall or ascend faster than the terminal velocity, so these two lines clamp the velocity in both directions; (5). Finally, make sure the altitude does not drop below zero. This prevents the simulation from letting the craft hit so hard that it goes underground.

In addition to the simulation code, the LanderSimulator also provides two utility functions. The first calculates the score and should only be called after the spacecraft lands. This method is shown here.
    public int score() {
        return (int) ((this.fuel * 10) + this.seconds + (this.velocity * 1000));
    }
The score method implements Equation 6.1. As you can see, it uses fuel, seconds and velocity to calculate the score according to the earlier equation. Additionally, a method is provided to determine if the spacecraft is still flying. If the altitude is greater than zero, it is still flying.
    public boolean flying() {
        return (this.altitude > 0);
    }
In the next section, we will see how the neural network actually flies the spacecraft and is given a score.

Calculating the Score
The PilotScore class implements the code necessary for the neural network to fly the spacecraft. This class also calculates the final score after the craft has landed. This class is shown in Listing 6.1.
    package org.encog.examples.neural.lunar;

    import org.encog.ml.CalculateScore;
    import org.encog.ml.MLMethod;
    import org.encog.neural.networks.BasicNetwork;

    // (1)
    public class PilotScore implements CalculateScore {
        // (2)
        @Override
        public double calculateScore(MLMethod network) {
            NeuralPilot pilot = new NeuralPilot((BasicNetwork) network, false);
            return pilot.scorePilot();
        }

        // (3)
        @Override
        public boolean shouldMinimize() {
            return false;
        }

        @Override
        public boolean requireSingleThreaded() {
            return false;
        }
    }
Listing 6.1: Calculating the Lander Score

(1). As you can see from this line, the PilotScore class implements the CalculateScore interface, which is used by both Encog simulated annealing and genetic algorithms to determine how effective a neural network is at solving the given problem. A low score could be either bad or good depending on the problem; (2). The CalculateScore interface requires two methods. The first method, calculateScore, accepts a neural network and returns a double that represents the score of the network; (3). The second method returns a value that indicates whether the score should be minimized. For this example we would like to maximize the score, so the shouldMinimize method returns false.

Flying the Spacecraft
This section shows how the neural network actually flies the spacecraft. The neural network will be fed environmental information such as fuel remaining, altitude and current velocity. The neural network will then output a single value that will indicate if the neural network wishes to thrust. The NeuralPilot class performs this flight.

The NeuralPilot constructor sets up the pilot to fly the spacecraft. The constructor is passed a network to fly the spacecraft, as well as a Boolean that indicates if telemetry should be tracked to the screen.
    class NeuralPilot {
        public NeuralPilot(BasicNetwork network, boolean track)
        {
            // (1)
            fuelStats = new NormalizedField(NormalizationAction.Normalize, "fuel", 200, 0, -0.9, 0.9);
            altitudeStats = new NormalizedField(NormalizationAction.Normalize, "altitude", 10000, 0, -0.9, 0.9);
            velocityStats = new NormalizedField(NormalizationAction.Normalize, "velocity", LanderSimulator.TERMINAL_VELOCITY, -LanderSimulator.TERMINAL_VELOCITY, -0.9, 0.9);

            // (2)
            this.track = track;
            this.network = network;
        }
        ...
    }
(1). The lunar lander must feed the fuel level, altitude and current velocity to the neural network. These values must be normalized as was covered in Chapter 2. To perform this normalization, the constructor begins by setting several normalization fields; (2). In addition to the normalized fields, we will also save the operating parameters. The track variable is saved to the instance level so that the program will later know if it should display telemetry.

The neural pilot will have three input neurons and one output neuron. These three input neurons will communicate the following three fields to the neural network.
• Current fuel level
• Current altitude
• Current velocity

These three input fields will produce one output field that indicates if the neural pilot would like to fire the thrusters.
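Before looking at scorePilot, it may help to see the shape of this computation: three normalized inputs flow through the network and produce one output, and the sign of that output becomes the thrust decision. Here is a hand-rolled sketch of a 3-4-1 feedforward pass with tanh activation; the weights are arbitrary placeholders for illustration, not the trained Encog network.

```java
// A minimal 3-4-1 feedforward pass (tanh activation), showing how
// three normalized inputs become a single thrust decision.
// The weights are arbitrary placeholders, not a trained network.
public class TinyPilotNet {
    static final double[][] W1 = {
        {0.2, -0.4, 0.1}, {-0.3, 0.5, 0.2}, {0.7, 0.1, -0.6}, {0.0, -0.2, 0.4}
    };
    static final double[] B1 = {0.1, -0.1, 0.0, 0.2};
    static final double[] W2 = {0.5, -0.5, 0.3, 0.8};
    static final double B2 = -0.1;

    public static double compute(double[] in) {
        double out = B2;
        for (int h = 0; h < W1.length; h++) {
            double sum = B1[h];
            for (int i = 0; i < in.length; i++) sum += W1[h][i] * in[i];
            out += W2[h] * Math.tanh(sum);   // hidden layer activation
        }
        return Math.tanh(out);               // output is always in (-1, 1)
    }

    public static void main(String[] args) {
        double[] input = {0.9, -0.5, -0.9};   // normalized fuel, altitude, velocity
        boolean thrust = compute(input) > 0;  // same decision rule as scorePilot
        System.out.println(thrust);
    }
}
```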

To normalize these three fields, define them as three NormalizedField objects. First, set up the fuel.
    fuelStats = new NormalizedField(NormalizationAction.Normalize, "fuel", 200, 0, -0.9, 0.9);
We know that the fuel ranges between 0 and 200. We will normalize this to the range of -0.9 to 0.9. This is very similar to the range -1 to 1, except it does not take the values all the way to the extremes, which sometimes helps the neural network learn better, especially when the full range is known.

Next velocity and altitude are set up.
    altitudeStats = new NormalizedField(NormalizationAction.Normalize, "altitude", 10000, 0, -0.9, 0.9);
    velocityStats = new NormalizedField(NormalizationAction.Normalize, "velocity", LanderSimulator.TERMINAL_VELOCITY, -LanderSimulator.TERMINAL_VELOCITY, -0.9, 0.9);
Velocity and altitude both have known ranges, just like fuel, so they are set up similarly. Because we do not have training data, it is very important that we know the ranges in advance. This is unlike the examples in Chapter 2, which provided sample data from which minimum and maximum values could be determined.
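The mapping performed here is plain linear range normalization, as covered in Chapter 2. The following is an illustrative stand-in for the arithmetic, not the Encog NormalizedField class itself:

```java
// Linear range normalization: map a value from [low, high]
// onto [normLow, normHigh]. Illustrative stand-in for Encog's
// NormalizedField, not the actual library class.
public class RangeNormalizer {
    public static double normalize(double x, double low, double high,
                                   double normLow, double normHigh) {
        return (x - low) / (high - low) * (normHigh - normLow) + normLow;
    }

    public static void main(String[] args) {
        // Fuel ranges from 0 to 200 liters, mapped to [-0.9, 0.9]
        System.out.println(normalize(0, 0, 200, -0.9, 0.9));    // prints -0.9
        System.out.println(normalize(200, 0, 200, -0.9, 0.9));  // prints 0.9
        System.out.println(normalize(100, 0, 200, -0.9, 0.9));  // prints 0.0
    }
}
```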

For this example, the primary purpose of flying the spacecraft is to receive a score. The scorePilot method calculates this score, simulating a flight from the point the spacecraft is dropped from the orbiter to the point it lands:
    public int scorePilot()
    {
        // (1)
        LanderSimulator sim = new LanderSimulator();

        // (2)
        while (sim.flying())
        {
            // (3)
            MLData input = new BasicMLData(3);
            input.setData(0, this.fuelStats.normalize(sim.getFuel()));
            input.setData(1, this.altitudeStats.normalize(sim.getAltitude()));
            input.setData(2, this.velocityStats.normalize(sim.getVelocity()));

            // (4)
            MLData output = this.network.compute(input);
            double value = output.getData(0);

            // (5)
            boolean thrust;
            if (value > 0)
            {
                thrust = true;
                if (track) System.out.println("THRUST");
            }
            else
                thrust = false;

            // (6)
            sim.turn(thrust);
            if (track) System.out.println(sim.telemetry());
        }
        // (7)
        return sim.score();
    }
(1). This method begins by creating a LanderSimulator object to simulate the very simple physics used by this program; (2). We now enter the main loop of the scorePilot method. It will continue looping as long as the spacecraft is still flying, that is, as long as its altitude is greater than zero; (3). Begin by creating an MLData object to hold the raw data obtained directly from the simulator. The normalize method of each NormalizedField object is used to normalize the fields of fuel, altitude and velocity; (4). The network computes its output; the single output neuron will determine if the thrusters should be fired; (5). If the value is greater than zero, then the thrusters will be fired. If telemetry tracking is enabled, also display that the thrusters were fired; (6). Process the next "turn" in the simulator, thrusting if necessary. Also display telemetry if tracking is enabled; (7). The spacecraft has now landed. Return the score based on the criteria previously discussed.

We will now look at how to train the neural pilot.

Training the Neural Pilot
This example can train the neural pilot using either a genetic algorithm or simulated annealing. Encog treats both training techniques very similarly. On one hand, you can simply provide a training set and use simulated annealing or a genetic algorithm just as you would a propagation training method. We will see an example of this later in the chapter as we apply these two techniques to the XOR problem, which will show how similar they can be to propagation training.

On the other hand, genetic algorithms and simulated annealing can do something that propagation training cannot: they allow you to train without a training set. It is still supervised training, since a scoring class is used, as developed earlier in this chapter. However, it does not need training data as input. Rather, the neural network needs feedback on how good a job it is doing. If you can provide this scoring function, simulated annealing or a genetic algorithm can train the neural network. Both methods will be discussed in the coming sections, beginning with genetic algorithms.

What is a Genetic Algorithm
Genetic algorithms attempt to simulate Darwinian evolution to create a better neural network. The neural network is reduced to an array of double variables. This array becomes the genetic sequence. The genetic algorithm begins by creating a population of random neural networks. All neural networks in this population have the same structure, meaning they have the same number of neurons and layers. However, they all have different random weights.

These neural networks are sorted according to their "scores." The scores are provided by the scoring method discussed in the last section. In the case of the neural pilot, this score reflects how well the ship landed. The top neural networks are selected to "breed." The bottom neural networks "die." When two networks breed, nature is simulated by splicing their DNA. In this case, splices are taken from the double array of each network and combined to create a new offspring neural network. The offspring neural networks take up the places vacated by the dying neural networks.

Some of the offspring will be “mutated.” That is, some of the genetic material will be random and not from either parent. This introduces needed variety into the gene pool and simulates the natural process of mutation. The population is sorted and the process begins again. Each iteration provides one cycle. As you can see, there is no need for a training set. All that is needed is an object to score each neural network. Of course you can use training sets by simply providing a scoring object that uses a training set to score each network.
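The cycle described above (score, sort, breed from the top, mutate some offspring, replace the bottom) can be sketched with plain double arrays. Everything here is a toy stand-in: the fitness function rewards closeness to a fixed target vector rather than piloting a lander, and none of it is Encog code.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.Random;

// A toy genetic algorithm over double arrays: score, sort, let the
// top 25% breed, mutate 10% of offspring genes, replace the bottom
// half, repeat. The fitness function is a hypothetical stand-in.
public class ToyGeneticAlgorithm {
    static final Random RND = new Random(42);
    static final double[] TARGET = {0.5, -0.25, 0.75};

    static double score(double[] genome) {
        double sum = 0;
        for (int i = 0; i < genome.length; i++) {
            double d = genome[i] - TARGET[i];
            sum -= d * d;                       // higher (closer to 0) is better
        }
        return sum;
    }

    public static double[] evolve(int populationSize, int generations) {
        double[][] pop = new double[populationSize][TARGET.length];
        for (double[] g : pop)                  // random starting population
            for (int i = 0; i < g.length; i++) g[i] = RND.nextDouble() * 2 - 1;

        for (int gen = 0; gen < generations; gen++) {
            Arrays.sort(pop, Comparator.comparingDouble(ToyGeneticAlgorithm::score).reversed());
            for (int i = populationSize / 2; i < populationSize; i++) {
                double[] a = pop[RND.nextInt(populationSize / 4 + 1)]; // parents from the top
                double[] b = pop[RND.nextInt(populationSize / 4 + 1)];
                for (int j = 0; j < TARGET.length; j++) {
                    pop[i][j] = (j % 2 == 0 ? a : b)[j];   // splice the "DNA"
                    if (RND.nextDouble() < 0.1)            // 10% mutation rate
                        pop[i][j] += RND.nextGaussian() * 0.1;
                }
            }
        }
        Arrays.sort(pop, Comparator.comparingDouble(ToyGeneticAlgorithm::score).reversed());
        return pop[0];                          // best genome found
    }

    public static void main(String[] args) {
        double[] best = evolve(50, 100);
        System.out.println(score(best));        // score improves toward 0
    }
}
```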

Using a Genetic Algorithm
Using the genetic algorithm is very easy: the NeuralGeneticAlgorithm class does the work. It implements the MLTrain interface, so once constructed, it is used in the same way as any other Encog training class. The following code creates a new NeuralGeneticAlgorithm to train the neural pilot.
    final MLTrain train = new NeuralGeneticAlgorithm(network, new NguyenWidrowRandomizer(), new PilotScore(), 500, 0.1, 0.25);
From Encog 3.3 onward, you instead initialize the genetic algorithm using MLMethodGeneticAlgorithm:
    train = new MLMethodGeneticAlgorithm(new MethodFactory(){
        @Override
        public MLMethod factor() {
            final BasicNetwork result = createNetwork();
            ((MLResettable)result).reset();
            return result;
        }}, new PilotScore(), 500);
The base network is provided to communicate the structure of the neural network to the genetic algorithm. The genetic algorithm will disregard weights currently set by the neural network. The randomizer is provided so that the neural network can create a new random population. The NguyenWidrowRandomizer attempts to produce starting weights that are less extreme and more trainable than the regular RangeRandomizer that is usually used. However, either randomizer could be used.

The value of 500 specifies the population size. Larger populations will train better, but will take more memory and processing time. The 0.1 value is used to mutate 10% of the offspring, and the 0.25 value is used to choose the mating population from the top 25% of the population (these last two values appear in the older NeuralGeneticAlgorithm constructor shown first). Now that the trainer is set up, the neural network is trained just like any Encog training object. Here we only iterate 50 times. This is usually enough to produce a skilled neural pilot. Below is the code snippet for the training part:
    ...
    public static void main(String args[])
    {
        BasicNetwork network = createNetwork();

        MLMethodGeneticAlgorithm train;

        train = new MLMethodGeneticAlgorithm(new MethodFactory(){
            @Override
            public MLMethod factor() {
                final BasicNetwork result = createNetwork();
                ((MLResettable)result).reset();
                return result;
            }}, new PilotScore(), 500);

        try {
            int epoch = 1;

            for (int i = 0; i < 50; i++) {
                train.iteration();
                System.out
                        .println("Epoch #" + epoch + " Score:" + train.getError());
                epoch++;
            }
            train.finishTraining();

            // Round trip the GA and then train again
            LunarLander.saveMLMethodGeneticAlgorithm("trainer.bin", train);
            train = LunarLander.loadMLMethodGeneticAlgorithm("trainer.bin");

            // Train again
            for (int i = 0; i < 50; i++) {
                train.iteration();
                System.out
                        .println("Epoch #" + epoch + " Score:" + train.getError());
                epoch++;
            }
            train.finishTraining();

        } catch (IOException ex) {
            ex.printStackTrace();
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }

        int epoch = 1;

        for (int i = 0; i < 50; i++) {
            train.iteration();
            System.out
                    .println("Epoch #" + epoch + " Score:" + train.getError());
            epoch++;
        }
        train.finishTraining();

        System.out.println("\nHow the winning network landed:");
        network = (BasicNetwork) train.getMethod();
        NeuralPilot pilot = new NeuralPilot(network, true);
        System.out.println(pilot.scorePilot());
        Encog.getInstance().shutdown();
    }
    }
This neural network could also have been trained using the EncogUtility class, as in the previous chapter. For simple training, EncogUtility is usually the preferred method. However, if your program needs to do something after each iteration, the more manual approach shown above may be preferable.

What is Simulated Annealing
Simulated annealing can also be used to train the neural pilot. Simulated annealing is similar to a genetic algorithm in that it needs a scoring object. However, it works quite differently internally. Simulated annealing simulates the metallurgical process of annealing. Annealing is the process by which a very hot molten metal is slowly cooled. This slow cooling process causes the metal to produce a strong, consistent molecular structure. Annealing is a process that produces metals less likely to fracture or shatter.

A similar process can be performed on neural networks. To implement simulated annealing, the neural network is converted to an array of double values, exactly the same process used for the genetic algorithm. Randomness is used to simulate the heating and cooling effect. While the neural network is still really "hot," its weights are perturbed rapidly and at random. As the network cools, the random changes become smaller. Only changes that produce a positive effect on the network's score are kept.

Using Simulated Annealing
To use simulated annealing to train the neural pilot, pass the argument anneal on the command line when running this example. It is very simple for the example to use annealing rather than a genetic algorithm. They both use the same scoring function and are interchangeable. The following lines of code make use of the simulated annealing algorithm for this example.
    if (args.length > 0 && args[0].equalsIgnoreCase("anneal"))
    {
        train = new NeuralSimulatedAnnealing(network, new PilotScore(), 10, 2, 100);
    }
The simulated annealing object NeuralSimulatedAnnealing is used to train the neural pilot. The neural network is passed along with the same scoring object that was used to train using a genetic algorithm. The values of 10 and 2 are the starting and stopping temperatures, respectively. They are not true temperatures in terms of Fahrenheit or Celsius. A higher number will produce more randomness; a lower number produces less randomness. The following code shows how this temperature, or factor, is applied.
    /**
     * Randomize the weights and bias values. This function does most of the
     * work of the class. Each call to this class will randomize the data
     * according to the current temperature. The higher the temperature, the
     * more randomness.
     */
    public void randomize() {
        final double[] array = NetworkCODEC
                .networkToArray(NeuralSimulatedAnnealing.this.network);

        for (int i = 0; i < array.length; i++) {
            double add = NeuralSimulatedAnnealing.CUT - Math.random();
            add /= this.anneal.getStartTemperature();
            add *= this.anneal.getTemperature();
            array[i] = array[i] + add;
        }

        NetworkCODEC.arrayToNetwork(array,
                NeuralSimulatedAnnealing.this.network);
    }
The number 100 specifies how many cycles, per iteration, that it should take to go from the higher temperature to the lower temperature. Generally, the more cycles, the more accurate the results will be. However, the higher the number, the longer it takes to train. There are no simple rules for how to set these values. Generally, it is best to experiment with different values to see which trains your particular neural network the best.
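The scheme described above can be sketched end to end with plain double arrays: perturb by an amount scaled by the current temperature, keep the change only if the score improves, and cool from the start temperature to the stop temperature over a fixed number of cycles. Everything here is a toy stand-in (including the linear cooling schedule and the fitness function), not Encog's NeuralSimulatedAnnealing class.

```java
import java.util.Random;

// A toy simulated-annealing loop over a double array. The temperature
// falls linearly from startTemp to stopTemp; the perturbation size is
// scaled by the current temperature; only improvements are kept.
public class ToyAnnealing {
    static final Random RND = new Random(7);

    static double score(double[] x) {
        double sum = 0;
        for (double v : x) sum -= (v - 0.5) * (v - 0.5); // best when all 0.5
        return sum;
    }

    public static double[] anneal(double[] start, double startTemp,
                                  double stopTemp, int cycles) {
        double[] best = start.clone();
        for (int cycle = 0; cycle < cycles; cycle++) {
            // linear cooling from startTemp down to stopTemp
            double temp = startTemp
                    - (startTemp - stopTemp) * cycle / (cycles - 1);
            double[] candidate = best.clone();
            for (int i = 0; i < candidate.length; i++)   // temperature-scaled noise
                candidate[i] += (RND.nextDouble() - 0.5) * temp / startTemp;
            if (score(candidate) > score(best))          // keep only improvements
                best = candidate;
        }
        return best;
    }

    public static void main(String[] args) {
        double[] best = anneal(new double[]{-1, -1, -1}, 10, 2, 1000);
        System.out.println(score(best));  // improves from -6.75 toward 0
    }
}
```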

Using the Training Set Score Class
Training sets can also be used with genetic algorithms and simulated annealing. Used this way, simulated annealing and genetic algorithms work a little differently from propagation training. There is no custom scoring function in this case; you simply use the TrainingSetScore object, which takes the training set and uses it to score the neural network.

Generally, resilient propagation will outperform genetic algorithms or simulated annealing when used in this way. Genetic algorithms and simulated annealing really excel when using a scoring method instead of a training set. Furthermore, simulated annealing can sometimes push backpropagation out of a local minimum. The Hello World application could easily be modified to use a genetic algorithm or simulated annealing. To change the above example to use a genetic algorithm, a few lines must be added. The following lines create a training set-based genetic algorithm. First, create a TrainingSetScore object.
    // create training data
    MLDataSet trainingSet = new BasicMLDataSet(XOR_INPUT, XOR_IDEAL);
    CalculateScore score = new TrainingSetScore(trainingSet);
This object can then be used with either a genetic algorithm or simulated annealing. The following code shows it being used with a genetic algorithm:
    // train the neural network
    MLTrain train = new MLMethodGeneticAlgorithm(new MethodFactory(){
        @Override
        public MLMethod factor() {
            final BasicNetwork result = XORHelloWorld.createNetwork();
            ((MLResettable)result).reset();
            return result;
        }}, score, 500);
To use the TrainingSetScore object with simulated annealing, simply pass it to the simulated annealing constructor, as was done above.
