This chapter will begin by looking at Elman and Jordan neural networks. These networks are often called simple recurrent networks (SRNs).
The Elman Neural Network
Elman and Jordan neural networks are recurrent neural networks that have additional layers but function very similarly to the feedforward networks of previous chapters. They also use similar training techniques. Figure 7.1 below shows an Elman neural network. The Elman neural network uses context neurons, labeled C1 and C2. The context neurons allow feedback, which is when the output from a previous iteration is used as the input for successive iterations. Notice that the context neurons are fed from the hidden neurons' output. There are no weights on these connections; they are simply an output conduit from the hidden neurons to the context neurons. The context neurons remember this output and feed it back to the hidden neurons on the next iteration. Therefore, the context layer is always feeding the hidden layer its own output from the previous iteration.
The connection from the context layer to the hidden layer is weighted, and this synapse learns as the network is trained. Context layers allow a neural network to recognize context. To see how important context is to a neural network, consider how the previous networks were trained: the order of the training set elements did not really matter. The training set could be jumbled in any way and the network would still train in the same manner. With an Elman or a Jordan neural network, the order becomes very important, because the training set element previously presented still affects the network. This is essential for prediction and makes Elman neural networks very useful as temporal neural networks.
Chapter 8 will delve more into temporal neural networks. Temporal networks attempt to see trends in data and predict future data values. Feedforward networks can also be used for prediction, but the input neurons are structured differently. This chapter will focus on how neurons are structured for simple recurrent neural networks.
Dr. Jeffrey Elman created the Elman neural network. Dr. Elman used an XOR pattern to test his neural network. However, he did not use a typical XOR pattern like the ones in previous chapters; he used an XOR pattern collapsed to just one input neuron. Consider the following XOR truth table.
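- 0 XOR 0 = 0
- 1 XOR 0 = 1
- 0 XOR 1 = 1
- 1 XOR 1 = 0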
Now, collapse this to a string of numbers. To do this simply read the numbers left-to-right, line-by-line. This produces the following:
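- 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0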
We will create a neural network that accepts one number from the above list and should predict the next number. This same data will be used with a Jordan neural network later in this chapter. Sample input to this neural network would be as follows:
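For example, pairing each value in the sequence with the next value as the ideal output gives training elements such as:
- Input: 0.0, ideal output: 0.0
- Input: 0.0, ideal output: 0.0
- Input: 0.0, ideal output: 1.0
- Input: 1.0, ideal output: 0.0
- Input: 0.0, ideal output: 1.0
- ...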
It would be impossible to train a typical feedforward neural network for this. The training information would be contradictory. Sometimes an input of 0 results in a 1; other times it results in a 0. An input of 1 has similar issues. The neural network needs context; it should look at what comes before. We will review an example that uses an Elman and a feedforward network to attempt to predict the output. An example of the Elman neural network can be found at ElmanXOR. When run, this program produces the following output:
As you can see, the program attempts to train both a feedforward and an Elman neural network with the temporal XOR data. The feedforward neural network does not learn the data well, but the Elman network learns better. In this case, the feedforward neural network only reaches an error of about 50%, while the Elman network reaches about 23%. The context layer helps considerably. (This program uses random weights to initialize the neural network. If the first run does not produce good results, try rerunning. A better set of starting weights can help.)
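To make the temporal data layout concrete before moving on, here is a minimal sketch of how the XOR sequence could be turned into an Encog training set in which each element's ideal output is the next value in the sequence. The helper name buildTemporalXOR is an assumption for illustration, not the example's actual code:
- import org.encog.ml.data.MLDataSet;
- import org.encog.ml.data.basic.BasicMLDataSet;
- // Hypothetical helper: build a one-input, one-output training set where
- // the ideal output for each element is the next value in the sequence.
- static MLDataSet buildTemporalXOR(final double[] sequence, final int count) {
-     final double[][] input = new double[count][1];
-     final double[][] ideal = new double[count][1];
-     for (int i = 0; i < count; i++) {
-         input[i][0] = sequence[i % sequence.length];
-         ideal[i][0] = sequence[(i + 1) % sequence.length];
-     }
-     return new BasicMLDataSet(input, ideal);
- }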
Creating an Elman Neural Network
Calling the createElmanNetwork method creates the Elman neural network in this example. This method is shown here.
- static MLMethod createElmanNetwork() {
-     // construct an Elman type network
-     ElmanPattern pattern = new ElmanPattern();
-     pattern.setActivationFunction(new ActivationTANH());
-     pattern.setInputNeurons(1);
-     pattern.addHiddenLayer(6);
-     pattern.setOutputNeurons(1);
-     return pattern.generate();
- }
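Because ElmanPattern.generate() returns an MLMethod that is, in practice, a BasicNetwork, a quick sanity check is possible. This short sketch assumes the method above; the variable name is illustrative:
- BasicNetwork elman = (BasicNetwork) createElmanNetwork();
- System.out.println("Input neurons: " + elman.getInputCount());   // prints 1
- System.out.println("Output neurons: " + elman.getOutputCount()); // prints 1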
Training an Elman Neural Network
Elman neural networks tend to be particularly susceptible to local minima. A local minimum is a point where training stagnates. Visualize the weight matrix and thresholds as a landscape with mountains and valleys. To get to the lowest error, you want to find the lowest valley. Sometimes training finds a low valley and searches near this valley for a lower spot. It may fail to find an even lower valley several miles away.
This example's training uses several training strategies to help avoid this situation. The same training routine is used for both the feedforward and Elman networks: backpropagation with a very small learning rate, augmented by a few training strategies that help greatly. The trainNetwork method, shown here, trains the neural network.
- public static double trainNetwork(final String what,
-         final MLMethod network, final MLDataSet trainingSet) {
-     // Create a score object and two trainers: simulated annealing as an
-     // alternate trainer, and backpropagation (with a very small learning
-     // rate) as the main trainer.
-     TrainingSetScore score = new TrainingSetScore(trainingSet);
-     final MLTrain trainAlt = new NeuralSimulatedAnnealing(
-             (BasicNetwork) network, score, 10.0, 2.0, 100);
-     final MLTrain trainMain = new Backpropagation(
-             (BasicNetwork) network, trainingSet, 0.000001, 0.0);
-     // Run single-threaded so results are repeatable.
-     ((Propagation) trainMain).setThreadCount(1);
-     final StopTrainingStrategy stop = new StopTrainingStrategy();
-     // Add strategies: keep only iterations that improve the error (greedy),
-     // fall back to simulated annealing when training stagnates (hybrid),
-     // and stop once no further improvement is seen.
-     trainMain.addStrategy(new Greedy());
-     trainMain.addStrategy(new HybridStrategy(trainAlt));
-     trainMain.addStrategy(stop);
-     // Iterate until the stop strategy signals that training should end.
-     int epoch = 0;
-     while (!stop.shouldStop()) {
-         trainMain.iteration();
-         System.out.println("Training " + what + ", Epoch #" + epoch
-                 + " Error:" + trainMain.getError());
-         epoch++;
-     }
-     return trainMain.getError();
- }
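To compare the two architectures, the example calls this routine once for each network. The sketch below is a hedged reconstruction of that invocation: buildTemporalXOR is the hypothetical helper sketched earlier, createFeedforwardNetwork is assumed to mirror createElmanNetwork using a FeedForwardPattern, and the element count of 120 is illustrative:
- final double[] sequence = { 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0 };
- final MLDataSet trainingSet = buildTemporalXOR(sequence, 120);
- final double elmanError = trainNetwork("Elman", createElmanNetwork(), trainingSet);
- final double feedforwardError = trainNetwork("Feedforward", createFeedforwardNetwork(), trainingSet);
- System.out.println("Best error rate with Elman Network: " + elmanError);
- System.out.println("Best error rate with Feedforward Network: " + feedforwardError);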
The Jordan Neural Network
Encog also contains a pattern for a Jordan neural network. The Jordan neural network is very similar to the Elman neural network. Figure 7.2 shows a Jordan neural network.
As you can see, a context neuron is used and is labeled C1, similar to the Elman network. However, the output from the output layer is fed back to the context layer, rather than the hidden layer. This small change in the architecture can make the Jordan neural network better for certain temporal prediction tasks.
The Jordan neural network has the same number of context neurons as it does output neurons. This is because the context neurons are fed from the output neurons. The XOR operator has only one output neuron. This leaves you with a single context neuron when using the Jordan neural network for XOR. Jordan networks work better with a larger number of output neurons. To construct a Jordan neural network, the JordanPattern should be used. The following code demonstrates this.
- static BasicNetwork createJordanNetwork() {
-     // construct a Jordan type network
-     JordanPattern pattern = new JordanPattern();
-     pattern.setActivationFunction(new ActivationTANH());
-     pattern.setInputNeurons(1);
-     pattern.addHiddenLayer(6);
-     pattern.setOutputNeurons(1);
-     return (BasicNetwork) pattern.generate();
- }
Encog includes an example XOR network that uses the Jordan neural network. This example is included mainly for completeness, so that the Elman and Jordan networks can be compared on the XOR operator. As previously mentioned, Jordan networks tend to do better when there are more output neurons. The Encog XOR example for Jordan cannot train to a very low error rate and does not perform noticeably better than a feedforward neural network. The Jordan example can be found at JordanXOR.
The ART1 Neural Network
The ART1 neural network is a type of Adaptive Resonance Theory (ART) neural network. ART1, developed by Stephen Grossberg and Gail Carpenter, supports only bipolar input. The ART1 neural network is trained as it is used and is used for classification. New patterns are presented to the ART1 network and are classified into either new or existing classes. Once the maximum number of classes has been used, the network will report that it is out of classes.
An ART1 network appears as a simple two-layer neural network. However, unlike a feedforward neural network, there are weights in both directions between the input and output layers. The input neurons are used to present patterns to the ART1 network. ART1 uses bipolar numbers, so each input neuron is either on or off. A value of one represents on, and a value of negative one represents off. The output neurons define the groups that the ART1 neural network will recognize. Each output neuron represents one group.
Using the ART1 Neural Network
We will now see how to actually make use of an ART1 network. The example presented here creates a network that is given a series of patterns to learn and recognize. This example can be found at NeuralART1. The example constructs an ART1 network that will be presented with new patterns to recognize and learn. If a new pattern is similar to a previous pattern, the new pattern is identified as belonging to the same group as the original pattern. If the pattern is not similar to any previous pattern, a new group is created. If there is already one group per output neuron, the neural network reports that it can learn no more patterns. The output from this example can be seen here.
The above output shows that the neural network is presented with patterns. The number to the right indicates in which group the ART1 network placed the pattern. Some patterns are grouped with previous patterns while other patterns form new groups. Once all of the output neurons have been assigned to a group, the neural network can learn no more patterns. Once this happens, the network reports that all classes have been exhausted.
First, an ART1 neural network must be created. This can be done with the following code.
- ART1 logic = new ART1(INPUT_NEURONS, OUTPUT_NEURONS);
The input patterns, stored as strings in PATTERN, must also be converted into boolean arrays. The setupInput method performs this conversion:
- public void setupInput() {
-     this.input = new boolean[PATTERN.length][INPUT_NEURONS];
-     for (int n = 0; n < PATTERN.length; n++) {
-         for (int i = 0; i < INPUT_NEURONS; i++) {
-             // an 'O' character marks an "on" neuron in the pattern string
-             this.input[n][i] = (PATTERN[n].charAt(i) == 'O');
-         }
-     }
- }
- public void run() {
-     this.setupInput();
-     ART1 logic = new ART1(INPUT_NEURONS, OUTPUT_NEURONS);
-     for (int i = 0; i < PATTERN.length; i++) {
-         // wrap the current pattern as bipolar input and allocate the output
-         BiPolarNeuralData in = new BiPolarNeuralData(this.input[i]);
-         BiPolarNeuralData out = new BiPolarNeuralData(OUTPUT_NEURONS);
-         // present the pattern; the network classifies (and learns) as it runs
-         logic.compute(in, out);
-         if (logic.hasWinner()) {
-             // a winning output neuron identifies the pattern's group
-             System.out.println(PATTERN[i] + " - " + logic.getWinner());
-         } else {
-             // no winner: every output neuron is already assigned to a group
-             System.out.println(PATTERN[i]
-                     + " - new Input and all Classes exhausted");
-         }
-     }
- }
The ART1 network can be used to cluster data on the fly. There is no distinct learning phase; it clusters data as it is received.
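How readily ART1 places a new pattern into an existing cluster is governed by its vigilance parameter. As a minimal sketch, assuming Encog's ART1 exposes a vigilance setter named setVigilance:
- ART1 logic = new ART1(INPUT_NEURONS, OUTPUT_NEURONS);
- // Higher vigilance requires a closer match to join an existing cluster,
- // producing more clusters; lower vigilance produces fewer, broader ones.
- logic.setVigilance(0.9);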
The NEAT Neural Network
NeuroEvolution of Augmenting Topologies (NEAT) is a Genetic Algorithm for evolving the structure and weights of a neural network. NEAT was developed by Ken Stanley at The University of Texas at Austin. NEAT relieves the neural network programmer of the tedious task of figuring out the optimal structure of a neural network’s hidden layer.
A NEAT neural network has an input and output layer, just like the more common feedforward neural networks. A NEAT network starts with only an input layer and output layer. The rest is evolved as the training progresses. Connections inside of a NEAT neural network can be feedforward, recurrent, or self-connected. All of these connection types will be tried by NEAT as it attempts to evolve a neural network capable of the given task.
As you can see, the above network has only input and output layers. This is not sufficient to learn XOR. These networks evolve by adding neurons and connections; the figure below shows a neural network that has evolved to process the XOR operator.
The above network evolved from the previous network. An additional hidden neuron was added between the first input neuron and the output neuron. Additionally, a recurrent connection was made from the output neuron back to the hidden neuron. These minor additions allow the neural network to learn the XOR operator. The connections and neurons are not the only things evolved; the weights between these neurons were evolved as well.
As shown in Figure 7.4, a NEAT network does not have clearly defined layers like traditional feedforward networks. There is a hidden neuron, but not really a hidden layer; if this were a traditional hidden layer, both input neurons would be connected to the hidden neuron. NEAT is a complex neural network type and training method. Additionally, there is a newer version of NEAT, called HyperNEAT. Complete coverage of NEAT is beyond the scope of this book; I will likely release a future book focused on applying NEAT and HyperNEAT with Encog. This section will focus on how to use NEAT as a potential replacement for a feedforward neural network, providing all of the critical information for using NEAT with Encog.
Creating an Encog NEAT Population
This section will show how to use a NEAT network to learn the XOR operator. There is very little difference between this example's code and the code used for a feedforward neural network to learn the XOR operator. One of Encog's core objectives is to make machine learning methods as interchangeable as possible. You can see this example at XORNEAT.
- package org.encog.examples.neural.neat;
- import org.encog.Encog;
- import org.encog.ml.CalculateScore;
- import org.encog.ml.data.MLDataSet;
- import org.encog.ml.data.basic.BasicMLDataSet;
- import org.encog.ml.ea.train.EvolutionaryAlgorithm;
- import org.encog.neural.neat.NEATNetwork;
- import org.encog.neural.neat.NEATPopulation;
- import org.encog.neural.neat.NEATUtil;
- import org.encog.neural.networks.training.TrainingSetScore;
- import org.encog.util.simple.EncogUtility;
- public class XORNEAT {
-     public static double XOR_INPUT[][] = { { 0.0, 0.0 }, { 1.0, 0.0 },
-             { 0.0, 1.0 }, { 1.0, 1.0 } };
-     public static double XOR_IDEAL[][] = { { 0.0 }, { 1.0 }, { 1.0 }, { 0.0 } };
-     public static void main(final String args[]) {
-         // (1) create the training set
-         MLDataSet trainingSet = new BasicMLDataSet(XOR_INPUT, XOR_IDEAL);
-         // (2) create a NEAT population of 1,000 genomes, 2 inputs, 1 output
-         NEATPopulation pop = new NEATPopulation(2, 1, 1000);
-         pop.setInitialConnectionDensity(1.0); // not required, but speeds training
-         pop.reset();
-         // (3) create a score object based on the training set
-         CalculateScore score = new TrainingSetScore(trainingSet);
-         // (4) construct the evolutionary algorithm and train the network
-         final EvolutionaryAlgorithm train = NEATUtil.constructNEATTrainer(pop, score);
-         do {
-             train.iteration();
-             System.out.println("Epoch #" + train.getIteration() + " Error:"
-                     + train.getError() + ", Species:" + pop.getSpecies().size());
-         } while (train.getError() > 0.01);
-         NEATNetwork network = (NEATNetwork) train.getCODEC().decode(
-                 train.getBestGenome());
-         // test the neural network
-         System.out.println("Neural Network Results:");
-         EncogUtility.evaluate(network, trainingSet);
-         Encog.getInstance().shutdown();
-     }
- }
Stepping through the numbered comments: (1) creates the XOR training set. (2) creates a NEAT population of 1,000 genomes with two input neurons and one output neuron; setting the initial connection density to 1.0 is not required, but it speeds training. (3) creates a score object from the training set; earlier we said that only the fit members of the population are allowed to breed to create the next generation, and the score object measures that fitness. (4) The final required step is to create the evolutionary algorithm, which implements the EvolutionaryAlgorithm interface; here the NEATUtil class constructs one for us.
Now that the population has been created, it must be trained.
Training an Encog NEAT Neural Network
Training a NEAT neural network is very similar to training any other neural network in Encog: create a training object and begin looping through iterations. As these iterations progress, the quality of the neural networks in the population should increase. A NEAT neural network is trained with a class implementing the TrainEA interface. Here you can see an EvolutionaryAlgorithm object being created through the NEATUtil factory method:
- final EvolutionaryAlgorithm train = NEATUtil.constructNEATTrainer(pop, score);
- EncogUtility.trainToError((MLTrain)train, 0.01);
- NEATNetwork network = (NEATNetwork) train.getCODEC().decode(train.getBestGenome());
- // test the neural network
- System.out.println("Neural Network Results:");
- EncogUtility.evaluate(network, trainingSet);
- Encog.getInstance().shutdown();
If you want to process a single input record, you can compute the network's output directly, as shown below:
- // requires: import org.encog.ml.data.MLData;
- //           import org.encog.ml.data.basic.BasicMLData;
- MLData outData = network.compute(new BasicMLData(XOR_INPUT[0]));
- System.out.printf("Input=[0, 0] with Output=%.03f\n", outData.getData(0));
- outData = network.compute(new BasicMLData(XOR_INPUT[1]));
- System.out.printf("Input=[1, 0] with Output=%.03f\n", outData.getData(0));
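For completeness, the remaining two rows of the truth table can be checked the same way:
- outData = network.compute(new BasicMLData(XOR_INPUT[2]));
- System.out.printf("Input=[0, 1] with Output=%.03f\n", outData.getData(0));
- outData = network.compute(new BasicMLData(XOR_INPUT[3]));
- System.out.printf("Input=[1, 1] with Output=%.03f\n", outData.getData(0));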
The above results show that the network has learned the XOR operator: XOR produces an output of 1.0 only when the two inputs are not both the same value.