In the current work, a BPNN method is adopted for estimating OTE20. The data mining method used in the present study is a multilayer BPNN, which is composed of multiple layers, each containing many neurons; adjacent layers are interconnected through weighted coefficients. Generally, three kinds of layers occur in a BPNN model: the first layer receives the inputs, the second (hidden) layer computes the weighted combinations of the inputs, and the third layer produces the output. The development of the BPNN involves three stages: the first stage is the preparation of the training data, the second stage explores various permutations and combinations of network architectures to identify the optimal one, and the third stage is testing. The number of hidden layers and neurons is selected by trial and error, and the best network topology is taken to be the one whose outputs are closest to the desired values; during training, the error computed at the output layer is propagated back through the network to adjust the weights. The weighted connection of the input constituents (Tiwari 2006) is expressed as

$$x_j = \sum_{i=1}^{y} w_{ij} x_i + b_j$$

where $x_j$ is the output variable, $x_i$ is the input variable, $y$ is the number of nodes (neurons) that link to node $j$, $b_j$ is the bias, and $w_{ij}$ is the weighted coefficient.
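For illustration only, the following Java sketch evaluates the node equation above for a single neuron; the input, weight, and bias values are hypothetical and are not taken from the fitted model.

```java
/** Minimal sketch of the weighted-sum computation for one BPNN node. */
public class NeuronSum {

    // Computes x_j = sum_i w[i] * x[i] + b for the inputs feeding node j.
    static double nodeOutput(double[] inputs, double[] weights, double bias) {
        double sum = bias;
        for (int i = 0; i < inputs.length; i++) {
            sum += weights[i] * inputs[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        // Four inputs, matching the input-layer size of the 4-9-1 topology.
        double[] x = {0.5, 1.2, 0.3, 0.8};   // hypothetical scaled input values
        double[] w = {0.1, -0.4, 0.25, 0.6}; // hypothetical weighted coefficients
        double b = 0.05;                     // hypothetical bias
        System.out.println(nodeOutput(x, w, b));
    }
}
```

In a BPNN this weighted sum is normally passed through an activation function (for example, a sigmoid) before being forwarded to the next layer.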
The BPNN modeling is carried out in the open-source WEKA software. The optimal topology of the BPNN model is shown in Figure 8, and the optimum values of its tuning parameters are given in Table 4.
Table 4. Optimal values of the BPNN tuning parameters

| BPNN topology | Number of hidden layers | Momentum | Learning rate | Iterations |
|---|---|---|---|---|
| 4-9-1 | 1 | 0.2 | 0.3 | 1,500 |
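As an illustration of how the settings in Table 4 map onto WEKA's MultilayerPerceptron classifier, the following Java sketch configures a 4-9-1 network with the listed learning rate, momentum, and iteration count. The ARFF file name, the attribute layout, and the cross-validation check are assumptions made for illustration, not the authors' actual data set or evaluation procedure.

```java
import weka.classifiers.Evaluation;
import weka.classifiers.functions.MultilayerPerceptron;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class BpnnOte20 {
    public static void main(String[] args) throws Exception {
        // Hypothetical ARFF file with the four input variables and OTE20 as the last attribute.
        Instances data = DataSource.read("ote20_training.arff");
        data.setClassIndex(data.numAttributes() - 1);

        MultilayerPerceptron bpnn = new MultilayerPerceptron();
        bpnn.setHiddenLayers("9");   // 4-9-1 topology: one hidden layer with 9 neurons
        bpnn.setLearningRate(0.3);   // learning rate from Table 4
        bpnn.setMomentum(0.2);       // momentum from Table 4
        bpnn.setTrainingTime(1500);  // number of training iterations (epochs)

        bpnn.buildClassifier(data);

        // 10-fold cross-validation as a quick sanity check of the fitted network.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(bpnn, data, 10, new java.util.Random(1));
        System.out.println(eval.toSummaryString());
    }
}
```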
Figure 8
Optimal topology of the BPNN.