How do I use the GPU when training an XGBoost model?

Question · Votes: 0 · Answers: 1

I have been trying to train an XGBoost model in a Jupyter Notebook. I installed XGBoost (GPU) with the following commands:

git clone --recursive https://github.com/dmlc/xgboost
cd xgboost
mkdir build
cd build
cmake .. -DUSE_CUDA=ON
make -j
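
Building the shared library alone does not make the GPU build visible to the notebook; the Python package from the same checkout (it lives under python-package in the repository) also has to be installed, and the notebook must import that copy rather than an older pip install. A minimal sanity check, using only standard Python introspection:

import xgboost as xgb

# The reported version and file path should point at the locally built copy,
# not at a previously installed CPU-only wheel.
print(xgb.__version__)
print(xgb.__file__)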

But as soon as I try to train the model with model.fit, the kernel restarts after a few minutes. Code:

import xgboost as xgb

params = {
    'max_depth': 50, 'n_estimators': 80, 'learning_rate': 0.1,
    'colsample_bytree': 7, 'gamma': 0, 'reg_alpha': 4,
    'objective': 'binary:logistic', 'eta': 0.3, 'silent': 1,
    'subsample': 0.8, 'tree_method': 'gpu_hist',
    'predictor': 'gpu_predictor',
}
xgb_model = xgb.XGBClassifier(**params).fit(X_train, y_train) 
xgb_prediction = xgb_model.predict(X_valid)

where X_train and y_train are derived from sklearn's TfidfVectorizer.
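
For context, a minimal sketch of how such features are typically produced; the raw-text variable names (train_texts, valid_texts) are assumptions, since the question does not show that step:

from sklearn.feature_extraction.text import TfidfVectorizer

# train_texts / valid_texts are hypothetical lists of documents.
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_texts)   # sparse CSR matrix
X_valid = vectorizer.transform(valid_texts)

TfidfVectorizer returns a sparse matrix, which XGBClassifier accepts directly.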

CUDA is already installed; cat /usr/local/cuda/version.txt gives: CUDA Version 10.2.89
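
To separate a driver/build problem from a data problem, a tiny synthetic run with the same tree_method can be tried first; if this also restarts the kernel, the TF-IDF data is not the cause. This is only a sketch using the standard xgboost training API:

import numpy as np
import xgboost as xgb

# Small random binary-classification problem, nowhere near GPU memory limits.
X = np.random.rand(1000, 20)
y = np.random.randint(0, 2, size=1000)
dtrain = xgb.DMatrix(X, label=y)

booster = xgb.train(
    {'tree_method': 'gpu_hist', 'objective': 'binary:logistic'},
    dtrain,
    num_boost_round=10,
)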


Tags: python, gpu, xgboost
1 Answer

0 votes

Try passing param['updater'] = 'grow_gpu' as an additional parameter to XGBClassifier. Read more here: https://xgboost.ai/2016/12/14/GPU-accelerated-xgboost.html
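
For illustration, this is how the suggestion would be wired into the code from the question; note that 'grow_gpu' comes from the 2016 post linked above and may no longer exist in recent XGBoost releases, where tree_method='gpu_hist' is the supported route:

params['updater'] = 'grow_gpu'  # updater suggested by the linked post; may be removed in newer versions
xgb_model = xgb.XGBClassifier(**params).fit(X_train, y_train)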
