## 2016年12月19日 星期一

### [ NNF For Java ] Using Temporal Data (Ch8)

Preface
• How a Predictive Neural Network Works
• Using the Encog Temporal Dataset
• Attempting to Predict Sunspots
• Using the Encog Market Dataset
• Attempting to Predict the Stock Market

Prediction is another common use for neural networks. A predictive neural network will attempt to predict future values based on present and past values. Such neural networks are called temporal neural networks because they operate over time. This chapter will introduce temporal neural networks and the support classes that Encog provides for them.

In this chapter, you will see two applications of Encog temporal neural networks. First, we will look at how to use Encog to predict sunspots. Sunspots are reasonably predictable and the neural network should be able to learn future patterns by analyzing past data. Next, we will examine a simple case of applying a neural network to making stock market predictions.

Before we look at either example, we must see how a temporal neural network actually works. A temporal neural network is usually either a feedforward or simple recurrent network. With the proper assignment of input and output neurons, the feedforward neural networks shown so far can serve as temporal neural networks.

How a Predictive Neural Network Works
A predictive neural network uses its inputs to accept information about current data and uses its outputs to predict future data. It uses two “windows,” a future window and a past window. Both windows must have a window size, which is the amount of data that is either predicted or is needed to predict. To see the two windows in action, consider the following data.

Consider a temporal neural network with a past window size of five and a future window size of two. This neural network would have five input neurons and two output neurons. We would break the above data among these windows to produce training data. The following data shows one such element of training data.

Of course the data above needs to be normalized in some way before it can be fed to the neural network. The above illustration simply shows how the input and output neurons are mapped to the actual data. To get additional data, both windows are simply slid forward. The next element of training data would be as follows.

You would continue sliding the past and future windows forward as you generate more training data. Encog contains specialized classes to prepare data in this format. Simply specify the size of the past, or input, window and the future, or output, window. These specialized classes will be discussed in the next section.
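The window-sliding procedure described above can be sketched in plain Java. This is a hypothetical illustration, not Encog's implementation; the `SlidingWindow` class and `makePairs` helper are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of how a temporal dataset slices a series:
// a past window of inputs paired with a future window of ideal outputs.
public class SlidingWindow {

    // Returns {input, ideal} pairs produced by sliding both windows forward.
    public static List<double[][]> makePairs(double[] series, int pastSize, int futureSize) {
        List<double[][]> pairs = new ArrayList<>();
        for (int start = 0; start + pastSize + futureSize <= series.length; start++) {
            double[] input = new double[pastSize];
            double[] ideal = new double[futureSize];
            System.arraycopy(series, start, input, 0, pastSize);
            System.arraycopy(series, start + pastSize, ideal, 0, futureSize);
            pairs.add(new double[][] { input, ideal });
        }
        return pairs;
    }

    public static void main(String[] args) {
        double[] series = { 1, 2, 3, 4, 5, 6, 7, 8 };
        // past window of 5, future window of 2: inputs [1..5] predict [6, 7], and so on
        for (double[][] pair : makePairs(series, 5, 2)) {
            System.out.println(java.util.Arrays.toString(pair[0])
                    + " -> " + java.util.Arrays.toString(pair[1]));
        }
    }
}
```

With a past window of five and a future window of two, each step forward produces one more training element, exactly as the text describes.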

Using the Encog Temporal Dataset
The Encog temporal dataset is contained in the package org.encog.ml.data.temporal. A few classes make up the Encog temporal dataset. These classes are as follows (Encog 3.3):

The TemporalDataDescription class describes one unit of data that is either used for prediction or output. The TemporalError class is an exception that is thrown if there is an error while processing the temporal data. The TemporalMLDataSet class operates just like any Encog dataset and allows the temporal data to be used for training. The TemporalPoint class represents one point of temporal data.

To begin using a TemporalMLDataSet we must instantiate it as follows:
1. TemporalMLDataSet result = new TemporalMLDataSet([past window size], [future window size]);
The above instantiation specifies both the size of the past and future windows. You must also define one or more TemporalDataDescription objects. These define the individual items inside of the past and future windows. One single TemporalDataDescription object can function as both a past and a future window element as illustrated in the code below.
1. TemporalDataDescription desc = new TemporalDataDescription([calculation type], [use for past], [use for future])
To specify that a TemporalDataDescription object functions as both a past and future element, use the value true for the last two parameters. There are several calculation types that you can specify for each data description. These types are summarized here.

The RAW type specifies that the data points should be passed on to the neural network unmodified. The PERCENT_CHANGE type specifies that each point should be passed on as a percentage change. The DELTA_CHANGE type specifies that each point should be passed on as the actual change between the two values. If you are normalizing the data yourself, you would use the RAW type. Otherwise, it is very likely you would use the PERCENT_CHANGE type.
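The three calculation types can be illustrated with a small sketch. This is not Encog's internal code; the `CalcTypes` class and its helper names are invented, but the arithmetic matches the descriptions above.

```java
// A sketch (not Encog's actual implementation) of what the three
// TemporalDataDescription calculation types do to successive points.
public class CalcTypes {

    public static double raw(double current) {
        return current; // RAW: pass the value through unmodified
    }

    public static double percentChange(double previous, double current) {
        return (current - previous) / previous; // PERCENT_CHANGE: relative change
    }

    public static double deltaChange(double previous, double current) {
        return current - previous; // DELTA_CHANGE: absolute change
    }

    public static void main(String[] args) {
        double yesterday = 50.0, today = 55.0;
        System.out.println(raw(today));                      // 55.0
        System.out.println(percentChange(yesterday, today)); // 0.1 (a 10% rise)
        System.out.println(deltaChange(yesterday, today));   // 5.0
    }
}
```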

Next, provide the raw data to train the temporal network from. To do this, create TemporalPoint objects and add them to the temporal dataset. Each TemporalPoint object can contain multiple values; each temporal data point should have the same number of values as there are TemporalDataDescription objects. The following code shows how to define a temporal data point.
1. TemporalPoint point  = new TemporalPoint([number of values])
2. point.setSequence([a sequence number])
3. point.setData(0, [value 1])
4. point.setData(1, [value 2])
Every data point should have a sequence number in order to sort the data points. The setData method calls allow the individual values to be set and should match the number of values specified in the constructor. Finally, call the generate method. This method takes all of the temporal points and creates the training set. After generate has been called, the TemporalMLDataSet object can be used for training.
1. result.generate()
The next section will use a TemporalMLDataSet object to predict sunspots.

Application to Sunspots
In this section we will see how to use Encog to predict sunspots, which are fairly periodic and predictable. A neural network can learn this pattern and predict the number of sunspots with reasonable accuracy. The output from the sunspot prediction program is shown below. Of course, the neural network first begins training and will train until the error rate falls below one percent (PredictSunspot):
Epoch #1 Error:0.5020612138484535
Epoch #2 Error:0.4237816078962016
Epoch #3 Error:0.027495491913444258
...
Epoch #34 Error:0.009390619550865828
Year Actual Predict Closed Loop Predict
1960 0.5723 0.5656 0.5656
1961 0.3267 0.3953 0.3936
...
1977 0.2148 0.1934 0.2045
1978 0.4891 0.3520 0.3336

Once the network has been trained, it tries to predict the number of sunspots between 1960 and 1978. It does this with at least some degree of accuracy. The number displayed is normalized and simply provides an idea of the relative number of sunspots. A larger number indicates more sunspot activity; a lower number indicates less sunspot activity.

There are two prediction numbers given: the regular prediction and the closed-loop prediction. Both prediction types use a past window of 30 and a future window of 1. The regular prediction simply uses the last 30 values from real data. The closed loop starts this way and, as it proceeds, its own predictions become the input as the window slides forward. This usually results in a less accurate prediction because any mistakes the neural network makes are compounded.
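The difference between the two modes can be sketched with a stand-in model. The `predictNext` function below is a dummy (a window average), not a trained network; the class and constants are invented for illustration. The point is only how the closed-loop copy feeds its own output back into the sliding window.

```java
import java.util.Arrays;

// Sketch of regular vs. closed-loop prediction. The "model" here is a
// stand-in (the mean of the window), not a trained network.
public class ClosedLoopDemo {

    public static double predictNext(double[] window) {
        return Arrays.stream(window).average().orElse(0.0); // dummy model
    }

    public static void main(String[] args) {
        double[] actual = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
        int window = 3, evaluateStart = 5;

        // the closed-loop copy starts out identical to the actual data
        double[] closedLoop = Arrays.copyOf(actual, actual.length);

        for (int i = evaluateStart; i < actual.length; i++) {
            // regular prediction: window always taken from real data
            double regular = predictNext(Arrays.copyOfRange(actual, i - window, i));
            // closed-loop prediction: window may contain earlier predictions
            double closed = predictNext(Arrays.copyOfRange(closedLoop, i - window, i));
            closedLoop[i] = closed; // feed the prediction back in
            System.out.printf("i=%d regular=%.3f closedLoop=%.3f%n", i, regular, closed);
        }
    }
}
```

Running this shows the closed-loop values drifting further from the series than the regular predictions, which is exactly the compounding-error effect described above.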

We will now examine how this program was implemented. This program can be found at PredictSunspot. As you can see, the program has sunspot data hardcoded near the top of the file. This data was taken from a C-based neural network example program. You can find the original application at the following URL: http://www.neural-networks-at-your-fingertips.com/bpn.html

The older, C-based neural network example was modified to make use of Encog. You will notice that the Encog version is much shorter than the C-based version. This is because much of what the example did was already implemented in Encog. Further, the Encog version trains the network faster because it makes use of resilient propagation, whereas the C-based example makes use of backpropagation.

This example goes through a two-step process for using the data. First, the raw data is normalized. Then, this normalized data is loaded into a TemporalMLDataSet object for temporal training. The normalizeSunspots method is called to normalize the sunspots. This method is shown below.
1. public void normalizeSunspots(double lo, double hi) {
2.     // (1)
3.     NormalizeArray norm = new NormalizeArray();
4.     norm.setNormalizedHigh(hi);
5.     norm.setNormalizedLow(lo);
6.
7.     // create arrays to hold the normalized sunspots
8.     // (2)
9.     normalizedSunspots = norm.process(SUNSPOTS);
10.     // (3)
11.     closedLoopSunspots = EngineArray.arrayCopy(normalizedSunspots);
12. }
The hi and lo parameters specify the high and low range to which the sunspots should be normalized. Normalization was discussed in Chapter 2. For this example, the lo value is 0.1 and the hi value is 0.9. (1). To normalize these arrays, create an instance of the NormalizeArray class. This object allows you to quickly normalize an array; to use it, simply set the normalized high and low values. (2). The array can now be normalized to this range by calling the process method. (3). Finally, copy the normalized sunspots to the closed-loop sunspots. Initially, the closed-loop array starts out the same as the regular data. However, the network's own predictions will later be used to fill this array.
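For illustration, the linear mapping that NormalizeArray performs can be sketched in plain Java. This is an assumed reimplementation, not Encog's source; it simply maps the array minimum to lo and the maximum to hi.

```java
// Self-contained sketch of range normalization: linearly map an array
// into [lo, hi] (0.1 to 0.9 in the sunspot example).
public class RangeNormalize {

    public static double[] normalize(double[] data, double lo, double hi) {
        double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
        for (double d : data) {          // find the observed range
            min = Math.min(min, d);
            max = Math.max(max, d);
        }
        double[] result = new double[data.length];
        for (int i = 0; i < data.length; i++) {
            // scale the value's position in [min, max] into [lo, hi]
            result[i] = ((data[i] - min) / (max - min)) * (hi - lo) + lo;
        }
        return result;
    }

    public static void main(String[] args) {
        double[] sunspots = { 5, 10, 100, 55 };
        // min maps to 0.1, max maps to 0.9, other values fall in between
        System.out.println(java.util.Arrays.toString(normalize(sunspots, 0.1, 0.9)));
    }
}
```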

Now that the sunspot data has been normalized, it should be converted to temporal data. This is done by calling the generateTraining method, shown below.
1. public MLDataSet generateTraining() {
2.     // (1)
3.     TemporalMLDataSet result = new TemporalMLDataSet(WINDOW_SIZE, 1);
4.     // (2)
5.     TemporalDataDescription desc = new TemporalDataDescription(TemporalDataDescription.Type.RAW, true, true);
6.     result.addDescription(desc);
7.     // (3)
8.     for (int year = TRAIN_START; year < TRAIN_END; year++) {
9.         // (4)
10.         TemporalPoint point = new TemporalPoint(1);
11.         point.setSequence(year);
12.         // (5)
13.         point.setData(0, this.normalizedSunspots[year]);
14.         result.getPoints().add(point);
15.     }
16.     // (6)
17.     result.generate();
18.     return result;
19. }
(1). This method will return an Encog dataset that can be used for training. First a TemporalMLDataSet is created and past and future window sizes are specified; (2). We will have a single data description. Because the data is already normalized, we will use RAW data. This data description will be used for both input and prediction, as the last two parameters specify. Finally, we add this description to the dataset; (3). It is now necessary to create all of the data points. We will loop between the starting and ending year, which are the years used to train the neural network. Other years will be used to test the neural network’s predictive ability; (4). Each data point will have only one value to predict the sunspots. The sequence is the year, because there is only one sunspot sample per year; (5). The one value we are using is the normalized number of sunspots. This number is both what we use to predict from past values and what we hope to predict in the future; (6). Finally, we generate the training set and return it.

The data is now ready for training. This dataset is trained using resilient propagation. This process is the same as those used many times earlier in this book. Once training is complete, we will attempt to predict sunspots using the application. This is done with the predict method, which is shown here.
1. public void predict(BasicNetwork network) {
2.     // (1)
3.     NumberFormat f = NumberFormat.getNumberInstance();
4.     f.setMaximumFractionDigits(4);
5.     f.setMinimumFractionDigits(4);
6.
7.     // (2)
8.     System.out.println("Year\tActual\tPredict\tClosed Loop Predict");
9.     for (int year = EVALUATE_START; year < EVALUATE_END; year++) {
10.         // (3) calculate based on actual data
11.         MLData input = new BasicMLData(WINDOW_SIZE);
12.         for (int i = 0; i < input.size(); i++) {
13.             input.setData(i, this.normalizedSunspots[(year - WINDOW_SIZE) + i]);
14.         }
15.         // (4)
16.         MLData output = network.compute(input);
17.         double prediction = output.getData(0);
18.         // (5)
19.         this.closedLoopSunspots[year] = prediction;
20.
21.         // (6) calculate "closed loop", based on predicted data
22.         for (int i = 0; i < input.size(); i++) {
23.             input.setData(i, this.closedLoopSunspots[(year - WINDOW_SIZE) + i]);
24.         }
25.         // (7)
26.         output = network.compute(input);
27.         double closedLoopPrediction = output.getData(0);
28.
29.         // (8) display
30.         System.out.println((STARTING_YEAR + year) + "\t"
31.                 + f.format(this.normalizedSunspots[year]) + "\t"
32.                 + f.format(prediction) + "\t"
33.                 + f.format(closedLoopPrediction));
34.
35.     }
36. }
(1). First, we create a NumberFormat object so that the numbers can be properly formatted. We will display four decimal places; (2). We display the heading for the table and begin to loop through the evaluation years; (3). We create input to the neural network based on actual data, which will be the basis for the regular prediction. We extract 30 years' worth of data for the past window; (4). The neural network is presented with the data and we retrieve the prediction; (5). The prediction is saved to the closed-loop array for use with future predictions; (6). We now calculate the closed-loop value. The calculation is essentially the same, except that the closed-loop data, which is continually modified, is used. Just as before, we use 30 years' worth of data; (7). We compute the output; (8). Finally, we display the actual value, the regular prediction, and the closed-loop prediction.

This will display a list of all of the sunspot predictions made by Encog. In the next section we will see how Encog can automatically pull current market information and attempt to predict stock market directions.

Using the Encog Market Dataset
Encog also includes a dataset specifically designed for stock market data. This dataset is capable of downloading data from external sources. Currently, the only external source included in Encog is Yahoo Finance. The Encog market dataset is built on top of the temporal dataset and most classes in the Encog market dataset descend directly from corresponding classes in the temporal data set. The following classes make up the Encog Market Dataset package:

The MarketDataDescription class represents one piece of market data that is part of either the past or future window. It descends from the TemporalDataDescription class. It consists primarily of a TickerSymbol object and a MarketDataType enumeration. The ticker symbol specifies the security to include and the MarketDataType specifies what type of data from this security to use. The available data types are listed below.

These are the market data types currently supported by Encog. They are all represented inside the MarketDataType enumeration.

The MarketMLDataSet class is descended from the TemporalMLDataSet. This is the main class when creating market-based training data for Encog. This class is an Encog dataset and can be trained. If any errors occur, the MarketError exception will be thrown.

The MarketPoint class descends from the TemporalPoint. You will usually not deal with this object directly, as Encog usually downloads market data from Yahoo Finance. The following code shows the general format for using the MarketMLDataSet class. First, create a loader. Currently, the YahooFinanceLoader is the only public loader available for Encog.
1. MarketLoader loader = new YahooFinanceLoader();
Next, we create the market dataset. We pass the loader, as well as the size of the past and future windows.
1. MarketMLDataSet market = new MarketMLDataSet(loader, [past window size], [future window size]);
Next, create a MarketDataDescription object. To do this, specify the needed ticker symbol and data type. The last two true values at the end specify that this item is used both for past and predictive purposes.
1. MarketDataDescription desc = new MarketDataDescription(Config.TICKER, MarketDataType.ADJUSTED_CLOSE, true, true);
We add this data description to the dataset.
1. market.addDescription(desc);
We can add additional descriptions as needed. Next, load the market data and generate the training data.
1. Calendar end = new GregorianCalendar();// end today
2. Calendar begin = (Calendar) end.clone();// begin 30 days ago
3.
4. // Gather training data for the last 2 years, stopping 60 days short of today.
5. // The 60 days will be used to evaluate prediction.
6. begin.add(Calendar.DATE, -60);
7. end.add(Calendar.DATE, -60);
8. begin.add(Calendar.YEAR, -2);
9.
10. market.load(begin.getTime(), end.getTime());
11. market.generate();
As shown in the code, the begin and end dates must be specified. This tells Encog the range from which to generate training data.

Application to the Stock Market
We will now look at an example of applying Encog to stock market prediction. This program attempts to predict the direction of a single stock based on past performance. This is a very simple stock market example and is not meant to offer any sort of investment advice. First, let’s explain how to run this example. There are four distinct modes in which this example can be run, depending on the command line argument that was passed. These arguments are summarized below.
• generate - Download financial data and generate training file.
• train - Train the neural network.
• evaluate - Evaluate the neural network.
• prune - Try a number of different architectures to determine the best configuration.

To begin the example you should run the main class, which is named MarketPredict. The following sections will show how this example generates data, trains, and then evaluates the resulting neural network. This application is located here. Each of these modes will now be covered.

Generating Training Data
The first step is to generate the training data. The example is going to download about two years' worth of financial information to train with. It takes some time to download and process this information. The training data is downloaded and written to an Encog binary (EGB) file. The class MarketBuildTraining provides this functionality. All work performed by this class is in the static method named generate. This method is shown below.
1. public static void generate(File dataDir) {
2.     // (1)
3.     final MarketLoader loader = new YahooFinanceLoader();
4.     // (2)
5.     final MarketMLDataSet market = new MarketMLDataSet(loader, Config.INPUT_WINDOW, Config.PREDICT_WINDOW);
6.     // (3)
7.     final MarketDataDescription desc = new MarketDataDescription(Config.TICKER, MarketDataType.ADJUSTED_CLOSE, true, true);
8.     market.addDescription(desc);
9.     // (4)
10.     Calendar end = new GregorianCalendar();// end today
11.     Calendar begin = (Calendar) end.clone();// begin 30 days ago
12.     // Gather training data for the last 2 years, stopping 60 days short of today.
13.     // The 60 days will be used to evaluate prediction.
14.     begin.add(Calendar.DATE, -60);
15.     end.add(Calendar.DATE, -60);
16.     begin.add(Calendar.YEAR, -2);
17.     market.load(begin.getTime(), end.getTime());
18.     market.generate();
19.     // (5)
20.     EncogUtility.saveEGB(new File(dataDir, Config.TRAINING_FILE), market);
21.
22.     // (6) create a network
23.     final BasicNetwork network = EncogUtility.simpleFeedForward(
24.             market.getInputSize(), Config.HIDDEN1_COUNT,
25.             Config.HIDDEN2_COUNT, market.getIdealSize(), true);
26.
27.     // (7) save the network and the training
28.     EncogDirectoryPersistence.saveObject(new File(dataDir, Config.NETWORK_FILE), network);
29. }
(1). This method begins by creating a YahooFinanceLoader that will load the requested financial data; (2). A new MarketMLDataSet object is created that will use the loader and a specified size for the past and future windows. By default, the program uses a future window size of one and a past window size of 10. These constants are all defined in the Config class; changing the values in the Config class is the way to control how the network is structured and trained; (3). The program uses a single market value from which to make predictions. It will use the adjusted closing price of the specified security. The security that the program is trying to predict is specified in the Config class; (4). The market data is now loaded, beginning two years ago and ending two months prior to today. The last two months will be used to evaluate the neural network's performance; (5). We now save the training data to a binary EGB file. It is important to note that TemporalMLDataSet, or any of its derived classes, will persist raw numeric data, just as a BasicMLDataSet would. Only the generated data will be saved, not the other support objects such as the MarketDataDescription objects; (6). We will create a network to save to an EG file. This network is a simple feedforward neural network that may have one or two hidden layers. The sizes of the hidden layers are specified in the Config class; (7). We now create the EG file and store the network to it. Later phases of the program, such as the training and evaluation phases, will use this file.

Training the Neural Network
Training the neural network is very simple. The network and training data are already created and stored in an EG file. All that the training class needs to do is load both of these resources from the EG file and begin training. The MarketTrain class does this. The static method train performs all of the training. This method is shown here.
1. public static void train(File dataDir) {
2.     // (1)
3.     final File networkFile = new File(dataDir, Config.NETWORK_FILE);
4.     final File trainingFile = new File(dataDir, Config.TRAINING_FILE);
5.
6.     // network file
7.     if (!networkFile.exists()) {
8.         System.out.println("Can't read file: "
9.                 + networkFile.getAbsolutePath());
10.         return;
11.     }
12.     // (2)
13.     BasicNetwork network = (BasicNetwork) EncogDirectoryPersistence
14.             .loadObject(networkFile);
15.
16.     // (3) training file
17.     if (!trainingFile.exists()) {
18.         System.out.println("Can't read file: "
19.                 + trainingFile.getAbsolutePath());
20.         return;
21.     }
22.
23.     final MLDataSet trainingSet = EncogUtility.loadEGB2Memory(trainingFile);
24.
25.     // (4) train the neural network
26.     EncogUtility.trainConsole(network, trainingSet, Config.TRAINING_MINUTES);
27.
28.     System.out.println("Final Error: "
29.             + network.calculateError(trainingSet));
30.     // (5)
31.     System.out.println("Training complete, saving network.");
32.     EncogDirectoryPersistence.saveObject(networkFile, network);
33.     System.out.println("Network saved.");
34.
35.     Encog.getInstance().shutdown();
36.
37. }
(1). The method begins by verifying that the Encog network and training files are present. The network and training data will be loaded from these files; (2). Next, use the EncogDirectoryPersistence object to load the EG file. We will extract a network; (3). Next, load the training data from disk. It will be used to train the network; (4). The neural network is now ready to train. We will use EncogUtility training and loop for the number of minutes specified in the Config class. This is the same as creating a training object and using iterations, as was done previously in this book. The trainConsole method is simply a shortcut to run iterations for a specified number of minutes; (5). Finally, the neural network is saved back to the EG file.

At this point, the neural network is trained. To further train the neural network, run the training again or move on to evaluating the neural network. If you train the same neural network again using resilient propagation, the error rate will initially spike. This is because the resilient propagation algorithm must reestablish proper delta values for training.

Incremental Pruning
One challenge with neural networks is determining the optimal architecture for the hidden layers. Should there be one hidden layer or two? How many neurons should be in each hidden layer? There are no easy answers to these questions. Generally, it is best to start with one hidden layer containing roughly twice as many neurons as the input layer. Some reports suggest that a second hidden layer has no advantages, although this is often debated. Other reports suggest a second hidden layer can sometimes lead to faster convergence. For more information, see the hidden layer page on the Heaton Research wiki: http://www.heatonresearch.com/wiki/Hidden_Layers

One utility provided by Encog is the incremental pruning class. This class allows you to use a brute force technique to determine an optimal hidden layer configuration. Calling the market example with the prune argument will perform an incremental prune. This will try a number of different hidden layer configurations to attempt to find the best one. This command begins by loading a training set to memory.
1. MLDataSet training = EncogUtility.loadEGB2Memory(file);
Next a pattern is created to specify the type of neural network to be created.
1. FeedForwardPattern pattern = new FeedForwardPattern();
2. pattern.setInputNeurons(training.getInputSize());
3. pattern.setOutputNeurons(training.getIdealSize());
4. pattern.setActivationFunction(new ActivationTANH());
The above code specifies the creation of feedforward neural networks using the hyperbolic tangent activation function. Next, the pruning object is created.
1. PruneIncremental prune = new PruneIncremental(training, pattern, 100, 1, 10, new ConsoleStatusReportable());
The object will perform 100 training iterations for each candidate, try one set of starting weights for each, and keep the 10 top networks. The best of these 10 is chosen to be the network with the smallest number of links.
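The brute-force idea behind PruneIncremental can be sketched without Encog. Everything here is a hypothetical illustration, not the library's code: the `score` function is a stand-in for the training error a candidate architecture would achieve after brief training.

```java
// Sketch of a brute-force architecture search: try every hidden-layer
// size in the given ranges, score each candidate, and keep the best.
public class BruteForceSearch {

    // Hypothetical scoring function standing in for a short training run;
    // lower is better, like a training error.
    public static double score(int hidden1, int hidden2) {
        return Math.abs(hidden1 - 20) + Math.abs(hidden2 - 10);
    }

    public static int[] search(int lo1, int hi1, int lo2, int hi2) {
        int[] best = null;
        double bestScore = Double.POSITIVE_INFINITY;
        for (int h1 = lo1; h1 <= hi1; h1++) {
            for (int h2 = lo2; h2 <= hi2; h2++) { // h2 == 0 means no second layer
                double s = score(h1, h2);
                if (s < bestScore) {
                    bestScore = s;
                    best = new int[] { h1, h2 };
                }
            }
        }
        return best;
    }

    public static void main(String[] args) {
        int[] best = search(5, 50, 0, 50);
        System.out.println("best: " + best[0] + ", " + best[1]); // best: 20, 10
    }
}
```

Like PruneIncremental, this search spends little effort on each candidate; the winner would still need further training afterward.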

The user may also specify the number and sizes of the hidden layers to try. Each call to addHiddenLayer specifies the lower and upper bound to try. The first call to addHiddenLayer specifies the range for the first hidden layer. Here we specify to try first hidden layer sizes from 5 to 50. Because the lower bound is not zero, a first hidden layer is always present.
1. prune.addHiddenLayer(5, 50);
Next we specify the range for the second hidden layer, here between 0 and 50 neurons. Because the lower bound is zero, we will also try networks with no second hidden layer.
1. prune.addHiddenLayer(0, 50);
Now that the object has been set up, we are ready to search. Calling the process method will begin the search.
1. prune.process();
Once the search is completed, you can call the getBestNetwork method to get the best-performing network. The following code obtains this network and saves it.
1. File networkFile = new File(dataDir, Config.NETWORK_FILE);
2. EncogDirectoryPersistence.saveObject(networkFile, prune.getBestNetwork());
We now have a neural network saved with a good combination of hidden layers and neurons. The pruning object does not train each network particularly well, as it is trying to search a large number of networks. At this point, you will want to further train this best network.

Evaluating the Neural Network
We are now ready to evaluate the neural network using the trained network from the last section and gauge its performance on actual current stock market data. The MarketEvaluate class contains all of the evaluation code. There are two important methods used during the evaluation process. The first is the determineDirection method, which tells in which direction the stock will move on the next day.
1. enum Direction {
2.     up, down
3. };
4.
5. public static Direction determineDirection(double d) {
6.     if (d < 0)
7.         return Direction.down;
8.     else
9.         return Direction.up;
10. }
We will need some current market data to evaluate against. The grabData method obtains the necessary market data. It makes use of a MarketMLDataSet to obtain some market data. This method is shown here.
1. public static MarketMLDataSet grabData() {
2.     // (1)
3.     MarketLoader loader = new YahooFinanceLoader();
4.     MarketMLDataSet result = new MarketMLDataSet(loader, Config.INPUT_WINDOW, Config.PREDICT_WINDOW);
5.     // (2)
6.     MarketDataDescription desc = new MarketDataDescription(Config.TICKER, MarketDataType.ADJUSTED_CLOSE, true, true);
7.     result.addDescription(desc);
8.     // (3)
9.     Calendar end = new GregorianCalendar();// end today
10.     Calendar begin = (Calendar) end.clone();// begin 30 days ago
11.     begin.add(Calendar.DATE, -60);
12.     // (4)
13.     result.load(begin.getTime(), end.getTime());
14.     result.generate();
15.
16.     return result;
17. }
(1). Just like the training data generation, market data is loaded from a YahooFinanceLoader object; (2). We create exactly the same data description as was used for training: the adjusted close for the specified ticker symbol. Past and future data are also desired. By feeding past data to the neural network, we will see how well the output matches the future data; (3). Choose what date range to evaluate the network. We will grab the last 60 days worth of data; (4). The market data is now loaded and generated by using the load method call.

The resulting data is returned to the calling method. Now that we have covered the support methods, it is time to learn how the actual evaluation occurs. The static method evaluate performs the actual evaluation. This method is shown below.
1. public static void evaluate(File dataDir) {
2.     // (1)
3.     File file = new File(dataDir, Config.NETWORK_FILE);
4.     if (!file.exists()) {
5.         System.out.println("Can't read file: " + file.getAbsolutePath());
6.         return;
7.     }
8.     // (2)
9.     BasicNetwork network = (BasicNetwork) EncogDirectoryPersistence.loadObject(file);
10.     // (3)
11.     MarketMLDataSet data = grabData();
12.     // (4)
13.     DecimalFormat format = new DecimalFormat("#0.0000");
14.     // (5)
15.     int count = 0;
16.     int correct = 0;
17.
18.     // (6)
19.     for (MLDataPair pair : data) {
20.         // (7)
21.         MLData input = pair.getInput();
22.         MLData actualData = pair.getIdeal();
23.         MLData predictData = network.compute(input);
24.         // (8)
25.         double actual = actualData.getData(0);
26.         double predict = predictData.getData(0);
27.         double diff = Math.abs(predict - actual);
28.         // (9)
29.         Direction actualDirection = determineDirection(actual);
30.         Direction predictDirection = determineDirection(predict);
31.         // (10)
32.         if (actualDirection == predictDirection)
33.             correct++;
34.
35.         count++;
36.         // (11)
37.         System.out.println("Day " + count + ":actual="
38.                 + format.format(actual) + "(" + actualDirection + ")"
39.                 + ",predict=" + format.format(predict) + "("
40.                 + predictDirection + ")" + ",diff=" + diff);
41.
42.     }
43.     double percent = (double) correct / (double) count;
44.     System.out.println("Direction correct:" + correct + "/" + count);
45.     System.out.println("Directional Accuracy:"
46.             + format.format(percent * 100) + "%");
47.
48. }
(1). First, make sure that the Encog EG file exists; (2). Then, we load the neural network from the EG file. Use the neural network that was trained in the previous section; (3). Load the market data to be used for network evaluation. This is done using the grabData method discussed earlier in this section; (4). Use a formatter to format the percentages; (5). During evaluation, count the number of cases examined and how many were correct; (6). Loop over all of the loaded market data; (7). Retrieve one training pair and obtain the actual data as well as what was predicted. The predicted data is determined by running the network using the compute method; (8). Now retrieve the actual and predicted values and calculate the difference. This establishes the accuracy of the neural network in predicting the actual price change; (9). Also calculate the direction the network predicted the security would take versus the direction it actually took; (10). If the direction was correct, increment the correct count by one. Either way, increment the total count by one; (11). Display the results for each case examined. Finally, display stats on the overall accuracy of the neural network.
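The directional-accuracy bookkeeping in evaluate can be condensed into a small self-contained sketch. The class and method names are invented for illustration; the sign convention (zero counts as "up") follows the determineDirection method shown earlier.

```java
// Condensed sketch of the directional-accuracy calculation in evaluate():
// compare the sign of the predicted change vs. the actual change and
// count how often they match.
public class DirectionalAccuracy {

    public static double accuracy(double[] actual, double[] predicted) {
        int correct = 0;
        for (int i = 0; i < actual.length; i++) {
            boolean actualUp = actual[i] >= 0;     // zero treated as "up", like determineDirection
            boolean predictUp = predicted[i] >= 0;
            if (actualUp == predictUp) {
                correct++;
            }
        }
        return (double) correct / actual.length;
    }

    public static void main(String[] args) {
        double[] actual  = {  0.02, -0.01,  0.03, -0.02, 0.01 };
        double[] predict = {  0.01, -0.03, -0.01, -0.01, 0.02 };
        // 4 of 5 directions match -> 0.8
        System.out.println("Directional accuracy: " + accuracy(actual, predict));
    }
}
```

Note that only the direction matters here; the magnitude of the predicted change (the diff value in the full program) is reported but never scored.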

The following snippet shows the output of this application from one launch. Because it uses data preceding the current date, the results will differ each time it is run. These results occur because the program is attempting to predict the percent movement of Apple Computer's stock price.

Here, the program had an accuracy of 60%, which is very good for this simple neural network. Accuracy rates generally ranged from 30% to 40% when this program was run at different intervals. This is a very simple stock market predictor and should not be used for any actual investing; it simply shows how to structure a neural network to predict market direction.

### [ Python 文章收集 ] List Comprehensions and Generator Expressions

Source From Here. Preface: Do you know the difference between the following syntax? [x for ...