I am working on a fraud analytics project and need some help. Previously, I used SAS Enterprise Miner to learn more about boosting/ensemble techniques, and I learned that boosting can help improve a model's performance.

Currently, my group has already built the following models in Python: Naive Bayes, Random Forest, and a Neural Network. We would like to use XGBoost to improve the F1 score. I am not sure whether this is possible, because I have only come across tutorials that show how to run XGBoost or Naive Bayes on their own.

I am looking for a tutorial that shows how to build a Naive Bayes model and then apply boosting to it. After that, we could compare the metrics to see whether they improve. I am new to machine learning, so I may have this concept wrong.

I thought about swapping values into XGBoost, but I am not sure which value to change, or whether it even works that way.
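To make the idea concrete, here is a rough sketch of the kind of thing I am imagining (not verified): scikit-learn's AdaBoostClassifier boosting GaussianNB as its base estimator, since XGBoost itself appears to only boost its own tree/linear learners. X_sm and y_sm are the same data used in the snippets below.

from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import f1_score

X_train, X_test, y_train, y_test = train_test_split(X_sm, y_sm, test_size=0.2, random_state=0)

# AdaBoost re-weights the training samples and combines many copies of the
# base estimator, so it can boost any estimator that accepts sample weights
boosted_nb = AdaBoostClassifier(base_estimator=GaussianNB(),  # named 'estimator' in scikit-learn >= 1.2
                                n_estimators=50,
                                random_state=0)
boosted_nb.fit(X_train, y_train)
print("Boosted NB F1:", f1_score(y_test, boosted_nb.predict(X_test)))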
Naive Bayes
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix, accuracy_score, f1_score, precision_score, recall_score

# X_sm / y_sm hold the features and labels
X_train, X_test, y_train, y_test = train_test_split(X_sm, y_sm, test_size=0.2, random_state=0)

# Fit a Gaussian Naive Bayes classifier and predict on the hold-out set
nb = GaussianNB()
nb.fit(X_train, y_train)
nb_pred = nb.predict(X_test)
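The Naive Bayes scores can then be computed with the metrics imported above, for example:

# Hold-out metrics for the plain Naive Bayes model
print("NB accuracy :", accuracy_score(y_test, nb_pred))
print("NB precision:", precision_score(y_test, nb_pred))
print("NB recall   :", recall_score(y_test, nb_pred))
print("NB F1       :", f1_score(y_test, nb_pred))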
XGBoost
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X_train, X_test, y_train, y_test = train_test_split(X_sm, y_sm, test_size=0.2, random_state=0)

# Gradient-boosted tree classifier for the binary fraud label
model = XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
                      colsample_bynode=1, colsample_bytree=0.9, gamma=0,
                      learning_rate=0.1, max_delta_step=0, max_depth=10,
                      min_child_weight=1, n_estimators=500, n_jobs=-1,
                      objective='binary:logistic', random_state=0,
                      reg_alpha=0, reg_lambda=1, scale_pos_weight=1,
                      subsample=0.9, verbosity=0)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
# predict() already returns class labels, so rounding here is just a safeguard
predictions = [round(value) for value in y_pred]
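And the same metrics for XGBoost, so the two models can be compared on the same test split:

from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Hold-out metrics for XGBoost, computed on the same test split as Naive Bayes
print("XGB accuracy :", accuracy_score(y_test, predictions))
print("XGB precision:", precision_score(y_test, predictions))
print("XGB recall   :", recall_score(y_test, predictions))
print("XGB F1       :", f1_score(y_test, predictions))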
In theory