The WNN mechanism is significantly influenced by the parameters described in Table 2. All of these parameters are considered important and treated as hyperparameters: the number of hidden layers (P_{W1}), the number of wavelons in the first layer (P_{W2}), the number of wavelons in the second layer (P_{W3}), the activation function (P_{W4}), the dropout rate (P_{W5}), and the learning rate (P_{W6}). The parameters P_{W1}, P_{W2}, P_{W3}, P_{W4}, and P_{W6} govern how the network weights transform the input data as it passes through the hidden layers to produce the desired output. P_{W5} specifies the fraction of wavelons intentionally dropped from the network to streamline processing and reduce overall computation time. Table 2 presents the parameter ranges for the four wavelets employed, together with remarks (columns 3–7).
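As a minimal sketch, the six hyperparameters and their search ranges from Table 2 could be encoded as a plain Python dictionary (the key names are illustrative, not taken from the original study):

```python
# Hypothetical encoding of the WNN hyperparameter search space (Table 2).
# Numeric entries are (low, high) range tuples; P_W4 lists the candidates.
search_space = {
    "P_W1_hidden_layers":   (0, 3),            # number of hidden layers
    "P_W2_wavelons_layer1": (32, 512),         # wavelons in the first layer
    "P_W3_wavelons_layer2": (32, 512),         # wavelons in the second layer
    "P_W4_activation":      ["tanh", "relu"],  # candidate activation functions
    "P_W5_dropout":         (0.0, 0.5),        # fraction of wavelons dropped
    "P_W6_learning_rate":   (0.0, 1.0),        # step size for weight updates
}
```

A hyperparameter optimizer would then sample one value per key from these ranges and evaluate the resulting network, eventually settling on the per-wavelet optimal values reported in the table.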

Table 2

| Index (1) | Parameters (2) | Range (Optimal value) for Gaussian (3) | Range (Optimal value) for Mexican hat (4) | Range (Optimal value) for Morlet (5) | Range (Optimal value) for Shannon (6) | Remarks (7) |
|---|---|---|---|---|---|---|
| P_{W1} | Number of hidden layers | 0–3 (2) | 0–3 (1) | 0–3 (2) | 0–3 (0) | Hidden layers sit between the input and output layers, allowing wavelons to receive inputs and generate outputs. Too many hidden layers can cause overfitting; although fewer hidden layers reduce computational work, they may result in underfitting. |
| P_{W2} | Number of wavelons in the first layer | 32–512 (256) | 32–512 (128) | 32–512 (128) | NA | The number of wavelons refers to the hidden units in the first hidden layer, which propagate the updated weighted inputs through the subsequent hidden layers. A larger number of wavelons can increase model accuracy. |
| P_{W3} | Number of wavelons in the second layer | 32–512 (32) | NA | 32–512 (32) | NA | The number of wavelons refers to the hidden units in the second hidden layer, which propagate the updated weighted inputs received from the first hidden layer to produce an output. A larger number of wavelons can increase model accuracy. |
| P_{W4} | Activation function | tanh, relu (tanh) | tanh, relu (relu) | tanh, relu (relu) | tanh, relu (tanh) | This function controls the activation and deactivation of wavelons as signals move from the input to the output layer. A suitable activation function is chosen based on the type of problem and the mother wavelet. |
| P_{W5} | Dropout | 0–0.5 (0.25) | 0–0.5 (0.25) | 0–0.5 (0) | 0–0.5 (0.25) | Dropout selectively removes wavelons from the network to streamline processing and reduce computing time. A dropout rate above a certain threshold indicates that the model is unsuitable for the provided input datasets; a higher dropout rate suggests that the datasets are noisy. |
| P_{W6} | Learning rate | 0–1 (8.68 × 10^{−4}) | 0–1 (10^{−3}) | 0–1 (2.04 × 10^{−3}) | 0–1 (5.92 × 10^{−4}) | The learning rate determines the step size by which weights are updated to minimize the loss function. A higher learning rate speeds up convergence but may cause the model to miss intricate patterns in the data, leading to underfitting. |

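To illustrate how these hyperparameters interact, the following is a minimal numpy sketch of one wavelon layer with the Mexican hat mother wavelet and dropout (P_{W5}). The layer sizes, initialization, and class names are illustrative assumptions, not the implementation used in the study:

```python
import numpy as np

rng = np.random.default_rng(42)

def mexican_hat(z):
    # Mexican hat (Ricker) mother wavelet: (1 - z^2) * exp(-z^2 / 2)
    return (1.0 - z**2) * np.exp(-(z**2) / 2.0)

class WaveletLayer:
    """One hidden layer of wavelons with per-unit translation and dilation."""
    def __init__(self, n_in, n_wavelons):
        self.W = rng.normal(scale=0.1, size=(n_in, n_wavelons))  # input weights
        self.t = rng.normal(size=n_wavelons)  # per-wavelon translations
        self.d = np.ones(n_wavelons)          # per-wavelon dilations

    def forward(self, x, dropout=0.0, train=False):
        z = (x @ self.W - self.t) / self.d    # translate and dilate the input
        h = mexican_hat(z)                    # wavelon activation
        if train and dropout > 0.0:
            # P_W5: randomly zero a fraction of wavelons, rescale the rest
            mask = rng.random(h.shape) >= dropout
            h = h * mask / (1.0 - dropout)
        return h

# Table 2 optima for the Mexican hat wavelet: one hidden layer (P_W1 = 1)
# with 128 wavelons (P_W2 = 128) and dropout 0.25 (P_W5); the input size
# and output head below are illustrative.
layer = WaveletLayer(n_in=4, n_wavelons=128)
w_out = rng.normal(scale=0.1, size=(128, 1))

x = rng.normal(size=(8, 4))  # a batch of 8 samples with 4 features each
y_hat = layer.forward(x, dropout=0.25, train=True) @ w_out
print(y_hat.shape)  # (8, 1)
```

In a full training loop, the learning rate (P_{W6}) would scale the gradient step applied to `W`, `t`, `d`, and `w_out` at each update.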
