Table 5

LSTM hyperparameter setup during training

| Item | Value | Remarks |
|---|---|---|
| Number of neurons | 20 | Using 1 LSTM layer |
| Learning rate | 0.01 | Using a constant learning rate |
| Max epochs | 100 | Iteration limited using early stopping |
| Batch size | 32 | – |
| Solver | Adam | Reference (Kingma & Ba 2014) |
| Activation functions | ReLU and linear | Rectified linear unit (ReLU) in the LSTM layer and linear in the output (dense) layer |
| Dropout | 0.2 | Applied to the LSTM and dense layers |
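For concreteness, the configuration in Table 5 could be assembled in Keras roughly as in the sketch below. Only the hyperparameter values come from the table; the input shape, loss function, early-stopping patience, and the exact placement of dropout between the LSTM and dense layers are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense, Dropout
from tensorflow.keras.callbacks import EarlyStopping

def build_model(n_timesteps, n_features):
    """Minimal sketch of the Table 5 LSTM configuration."""
    model = Sequential([
        # One LSTM layer with 20 neurons and ReLU activation (Table 5).
        LSTM(20, activation="relu",
             input_shape=(n_timesteps, n_features)),
        # Dropout of 0.2; placement between the LSTM and dense
        # layers is an assumption.
        Dropout(0.2),
        # Linear activation at the output (dense) layer (Table 5).
        Dense(1, activation="linear"),
    ])
    # Adam solver with a constant learning rate of 0.01
    # (Kingma & Ba 2014); MSE loss is assumed, not given in Table 5.
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
        loss="mse",
    )
    return model

# Training with batch size 32 and up to 100 epochs, terminated by
# early stopping (the patience value here is an assumption):
# model = build_model(n_timesteps, n_features)
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=100, batch_size=32,
#           callbacks=[EarlyStopping(monitor="val_loss", patience=10,
#                                    restore_best_weights=True)])
```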