Hyperparameter optimization with Hyperopt (Bayesian hyperparameter optimization) produces hyperparameters outside the defined search space

Question · 0 votes · 1 answer

I implemented hyperparameter optimization with hyperopt for an XGBClassifier. To do so, I defined a search space, e.g.

"n_estimators": hp.choice("n_estimators",np.arange(32, 264, 8, dtype=int))

However, hyperopt suggests 18 for "n_estimators", which is outside the defined search space.

Is this "normal", expected behavior, and if so, why? Otherwise I assume I have defined the search space incorrectly.

I appreciate any help or explanation.

Edit 1: Reproducible example:

import numpy as np
from sklearn.metrics import precision_score, f1_score, accuracy_score, recall_score, average_precision_score, roc_auc_score
import xgboost as xgb
from hyperopt import STATUS_OK, Trials, fmin, hp, tpe

# Search space
space={
        "n_estimators": hp.choice("n_estimators",np.arange(32, 264, 8, dtype=int)),                   # tune 32 - 256
        "eta":hp.uniform("eta",0.01,0.9),                                   # learning rate # tune 0.01 - 0.9
        "gamma":hp.uniform("gamma",0.01,0.9),                               # tune 0 - 0.9
        "max_depth":hp.choice("max_depth", np.arange(6, 18, 1, dtype=int)),                       # tune 6 - 18
        "min_child_weight":hp.quniform('min_child_weight', 0, 10, 1),       # tune 0 - 10
        "subsample":hp.uniform("subsample",0.5,1),                          # tune 0.5 - 1
        "colsample_bytree":hp.uniform("colsample_bytree",0,1),              # tune 0 - 1
        "colsample_bylevel":hp.uniform("colsample_bylevel",0,1),            # tune 0 - 1
        "colsample_bynode":hp.uniform("colsample_bynode",0,1),              # tune 0 - 1
        "scale_pos_weight": 1,                  # tune by class imbalance: (sum(negative instances) / sum(positive instances))
    }

def objective(space):
        params={     
                # General parameters
                "booster":'gbtree',
                "nthread":16,
                
                # Booster parameters
                "n_estimators":space["n_estimators"],           # tune 32 - 256
                "eta":space["eta"],                               # learning rate # tune 0.01 - 0.9
                "gamma":space["gamma"],                           # tune 0 - 0.9
                "max_depth":space["max_depth"],                   # tune 6 - 18
                "min_child_weight":space["min_child_weight"],     # tune 0 - 10
                "subsample":space["subsample"],                   # tune 0.5 - 1
                "colsample_bytree":space["colsample_bytree"],     # tune 0 - 1
                "colsample_bylevel":space["colsample_bylevel"],   # tune 0 - 1
                "colsample_bynode":space["colsample_bynode"],     # tune 0 - 1
                "scale_pos_weight":space["scale_pos_weight"],     # tune by class imbalance: (sum(negative instances) / sum(positive instances))
                
                # Learning task parameters
                "objective":"multi:softmax", # multi:softprob
                "num_class":2,
                #eval_metric="auc", # default metric will be assigned according to objective, logloss for classification
                "seed":42,
                }
        
        clf=xgb.XGBClassifier(**params)                      
                        
        evaluation = [( X_valid, y_valid)]
        
        clf.fit(X_train, y_train, eval_set=evaluation,
                verbose=False)
        
        preds = clf.predict_proba(X_valid)

        predicted_classes = preds.argmax(axis=1) # extract the class with the highest probability

        f1 = f1_score(y_valid, predicted_classes)
        acc = accuracy_score(y_valid, predicted_classes)
        recall = recall_score(y_valid, predicted_classes)
        precision = precision_score(y_valid, predicted_classes)
        average_precision = average_precision_score(y_valid, predicted_classes)
        roc_auc = roc_auc_score(y_valid, predicted_classes)

        return {'loss': -f1, 'status': STATUS_OK, 'f1': f1, 'acc': acc, 'recall': recall, 'precision': precision, 'average_precision': average_precision, 'roc_auc': roc_auc}

trials = Trials()

best_hyperparams = fmin(fn = objective,
                        space = space,
                        algo = tpe.suggest,
                        max_evals = 10,
                        trials = trials,
                        rstate = np.random.default_rng(42))

print("The best hyperparameters are : ")
print(best_hyperparams)
print(trials.best_trial["result"])
optimization xgboost hyperparameters hyperopt
1 Answer

Apparently, for a hyperparameter defined with `choice`, the output of `fmin` is the *index* of the chosen option in the options list, not the option itself: https://github.com/hyperopt/hyperopt/issues/284

If you look at `trials.__dict__` at the end of your code, you will see that all the values of `n_estimators` are integers between 0 and 28, i.e. indices into the 29-element options list, not the values in the list itself. So the 18 you saw is an index, and the actual value tried is `np.arange(32, 264, 8)[18]`, which is 176 and is inside your search space.
