
The WNN mechanism is significantly influenced by the parameters described in Table 2. Here also, all the parameters are considered important and treated as hyperparameters: the number of hidden layers (PW1), the number of wavelons in the first layer (PW2), the number of wavelons in the second layer (PW3), the activation function (PW4), the dropout rate (PW5), and the learning rate (PW6). The parameters PW1, PW2, PW3, PW4, and PW6 control the network weights as the input data passes through the hidden layers to generate the desired outcome. PW5 specifies the fraction of wavelons intentionally dropped from the neural network to improve processing and reduce the overall computation time. Table 2 presents the parameter ranges and optimal values for the four mother wavelets employed, together with remarks.
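For illustration, common real-valued forms of the four mother wavelets compared in Table 2 are sketched below as Python functions. Normalization constants are omitted, and these textbook expressions may differ from the exact formulations adopted in this study; they are intended only to show the kind of transfer function a wavelon applies.

```python
import numpy as np

def gaussian(x):
    # First derivative of a Gaussian (one common "Gaussian wavelet" form)
    return -x * np.exp(-0.5 * x**2)

def mexican_hat(x):
    # Second derivative of a Gaussian (Ricker wavelet)
    return (1.0 - x**2) * np.exp(-0.5 * x**2)

def morlet(x, w0=5.0):
    # Real-valued Morlet with centre frequency w0 (assumed value)
    return np.cos(w0 * x) * np.exp(-0.5 * x**2)

def shannon(x):
    # Real-valued Shannon wavelet; np.sinc(x) = sin(pi*x)/(pi*x)
    return np.sinc(0.5 * x) * np.cos(1.5 * np.pi * x)
```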

Table 2

Hyperparameters of WNN

PW1: Number of hidden layers
Range (optimal value): Gaussian 0–3 (2); Mexican hat 0–3 (1); Morlet 0–3 (2); Shannon 0–3 (0)
Remarks: The hidden layers connect the input and output layers, allowing wavelons to take inputs and generate outputs. Increasing the number of hidden layers can cause overfitting, while fewer hidden layers reduce computational work but may result in underfitting.

PW2: Number of wavelons in the first layer
Range (optimal value): Gaussian 32–512 (256); Mexican hat 32–512 (128); Morlet 32–512 (128); Shannon NA
Remarks: The wavelons are the hidden units in the first hidden layer that pass the updated weighted inputs through the subsequent hidden layers. A larger number of wavelons increases the accuracy of the model.

PW3: Number of wavelons in the second layer
Range (optimal value): Gaussian 32–512 (32); Mexican hat NA; Morlet 32–512 (32); Shannon NA
Remarks: The wavelons are the hidden units in the second hidden layer that pass the weighted inputs received from the first hidden layer onward to produce the output. A larger number of wavelons increases the accuracy of the model.

PW4: Activation function
Range (optimal value): Gaussian tanh, relu (tanh); Mexican hat tanh, relu (relu); Morlet tanh, relu (relu); Shannon tanh, relu (tanh)
Remarks: This function controls the activation and deactivation of wavelons as signals move from the input to the output layer. A suitable activation function is chosen based on the type of problem and the mother wavelet.

PW5: Dropout
Range (optimal value): Gaussian 0–0.5 (0.25); Mexican hat 0–0.5 (0.25); Morlet 0–0.5 (0); Shannon 0–0.5 (0.25)
Remarks: Dropout selectively removes specific wavelons from the neural network to optimize processing and reduce computing time. A dropout rate above a certain threshold indicates that the model is unsuitable for the provided input datasets, and a higher dropout suggests that the given datasets are noisy.

PW6: Learning rate
Range (optimal value): Gaussian 0–1 (8.68 × 10⁻⁴); Mexican hat 0–1 (10⁻³); Morlet 0–1 (2.04 × 10⁻³); Shannon 0–1 (5.92 × 10⁻⁴)
Remarks: The learning rate determines the step size at which weights are updated to minimize the loss function. A higher learning rate speeds up convergence, but it may miss intricate patterns in the available information, leading to underfitting.
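To show how the tabulated optima translate into a network configuration, the sketch below assembles a WNN with the Gaussian-wavelet settings from Table 2 (two hidden layers, 256 and 32 wavelons, tanh activation, dropout of 0.25, and a learning rate of 8.68 × 10⁻⁴) using tf.keras. How a wavelon combines the mother wavelet with the tuned activation (PW4) is not prescribed here; the sketch applies the wavelet as the wavelon transfer function followed by the tuned activation, which is one plausible arrangement, and the input dimension and single regression output are hypothetical.

```python
import tensorflow as tf

def gaussian_wavelet(x):
    # First derivative of a Gaussian; assumed form of the "Gaussian" mother wavelet
    return -x * tf.exp(-0.5 * tf.square(x))

n_features = 8  # hypothetical input dimension

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_features,)),
    tf.keras.layers.Dense(256, activation=gaussian_wavelet),  # PW2: 256 wavelons (first hidden layer)
    tf.keras.layers.Activation("tanh"),                       # PW4: tanh for the Gaussian wavelet
    tf.keras.layers.Dropout(0.25),                            # PW5: dropout of 0.25
    tf.keras.layers.Dense(32, activation=gaussian_wavelet),   # PW3: 32 wavelons (second hidden layer)
    tf.keras.layers.Activation("tanh"),
    tf.keras.layers.Dense(1),                                 # single regression output (assumed)
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=8.68e-4),  # PW6: optimal learning rate
    loss="mse",
)
```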
