LSTM hyperparameter settings used during training
Items | Value | Remarks
---|---|---
Number of neurons | 20 | Using 1 LSTM layer
Learning rate | 0.01 | Constant learning rate
Max epochs | 100 | Training may terminate earlier via early stopping
Batch size | 32 | –
Solver | Adam | Reference (Kingma & Ba 2014)
Activation functions | ReLU and linear | Rectified linear unit (ReLU) in the LSTM layer; linear at the output (dense) layer
Dropout | 0.2 | Applied to the LSTM and dense layers
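For concreteness, the table maps onto a Keras model roughly as follows. This is a minimal sketch, not the authors' code: the input shape, loss function, validation split, and early-stopping patience are assumptions that the table does not specify.

```python
# Minimal Keras sketch of the LSTM configuration in the table above.
# Input dimensions, loss, validation split, and early-stopping patience
# are ASSUMPTIONS; only the tabled hyperparameters come from the paper.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

timesteps, n_features = 10, 1                          # assumed input shape
X_train = np.random.rand(256, timesteps, n_features)   # placeholder data
y_train = np.random.rand(256, 1)

model = keras.Sequential([
    layers.Input(shape=(timesteps, n_features)),
    # One LSTM layer with 20 neurons, ReLU activation, and 0.2 dropout.
    layers.LSTM(20, activation="relu", dropout=0.2),
    # 0.2 dropout ahead of the dense layer, per the table.
    layers.Dropout(0.2),
    # Linear activation at the output (dense) layer.
    layers.Dense(1, activation="linear"),
])

# Adam solver (Kingma & Ba 2014) with a constant learning rate of 0.01;
# MSE loss is an assumption.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.01), loss="mse")

# Up to 100 epochs, terminated earlier by early stopping; the monitored
# quantity and patience are assumptions.
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True
)
model.fit(
    X_train, y_train,
    validation_split=0.2,   # assumed hold-out fraction
    epochs=100,
    batch_size=32,
    callbacks=[early_stop],
)
```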