Hyperparameter optimization was performed to improve the prediction performance of the models. Three RF hyperparameters (n_estimators, min_samples_split, and min_samples_leaf) and seven XGB hyperparameters (n_estimators, max_depth, min_child_weight, learning_rate, gamma, subsample, and colsample_bytree) were optimized (Table 3). In this study, hyperparameter tuning was performed using the tree-structured Parzen estimator (TPE), a Bayesian optimization technique. At each iteration, the TPE selects the hyperparameter set with the largest expected improvement (EI) value, conditioned on the results of previous iterations, as follows:
$$\mathrm{EI}_{y^{*}}(x) = \int_{-\infty}^{y^{*}} \left(y^{*} - y\right)\, p(y \mid x)\, dy \tag{1}$$

where $y^{*}$ is a threshold on the objective values observed so far and $p(y \mid x)$ is the surrogate probability of obtaining objective value $y$ given the hyperparameter set $x$.
Table 3. Hyperparameter search space for RF and XGB

| Model | Parameter         | Range        |
|-------|-------------------|--------------|
| RF    | n_estimators      | {100, 500}   |
| RF    | min_samples_split | {2, 6}       |
| RF    | min_samples_leaf  | {1, 6}       |
| XGB   | n_estimators      | {100, 350}   |
| XGB   | max_depth         | {3, 8}       |
| XGB   | min_child_weight  | {1, 10}      |
| XGB   | learning_rate     | {0.01, 0.08} |
| XGB   | gamma             | {0.1, 3}     |
| XGB   | subsample         | {0.5, 1}     |
| XGB   | colsample_bytree  | {0.6, 0.9}   |
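To make the procedure concrete, the following is a minimal sketch of TPE tuning over the RF search space in Table 3, using the hyperopt library (a widely used TPE implementation; the paper does not name its software). The synthetic dataset, 5-fold cross-validation, R² objective, and evaluation budget are illustrative assumptions, not details from the study.

```python
# Minimal TPE sketch with hyperopt; data, metric, CV scheme, and
# evaluation budget are assumptions for illustration only.
from hyperopt import fmin, tpe, hp, Trials, STATUS_OK
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Placeholder data; substitute the study's actual dataset.
X, y = make_regression(n_samples=500, n_features=10, noise=0.1, random_state=0)

# RF search space from Table 3 (integer-valued, so quniform with step 1).
rf_space = {
    "n_estimators": hp.quniform("n_estimators", 100, 500, 1),
    "min_samples_split": hp.quniform("min_samples_split", 2, 6, 1),
    "min_samples_leaf": hp.quniform("min_samples_leaf", 1, 6, 1),
}

def objective(params):
    """Cross-validated loss for one hyperparameter set; TPE minimizes this."""
    model = RandomForestRegressor(
        n_estimators=int(params["n_estimators"]),        # quniform returns floats
        min_samples_split=int(params["min_samples_split"]),
        min_samples_leaf=int(params["min_samples_leaf"]),
        random_state=0,
    )
    # Assumed metric: mean R^2 over 5 folds; negated because fmin minimizes.
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    return {"loss": -score, "status": STATUS_OK}

trials = Trials()  # records each iteration so TPE can condition on past results
best = fmin(
    fn=objective,
    space=rf_space,
    algo=tpe.suggest,  # the tree-structured Parzen estimator
    max_evals=50,      # assumed budget; the paper does not state one
    trials=trials,
)
print("Best RF hyperparameters:", best)
```

The XGB space would be built the same way, combining hp.quniform for the integer parameters (n_estimators, max_depth, min_child_weight) with hp.uniform for the continuous ones (learning_rate, gamma, subsample, colsample_bytree) over the ranges in Table 3.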