Unable to train an XgBoost model - PySpark

Problem description

I am trying to train an XgBoost model using a Spark DataFrame that looks like this:

+--------------------+-------------------+
|            features|         TARGET_VAL|
+--------------------+-------------------+
|(122,[0,1,9,10,11...|                0.0|
|(122,[0,1,8,9,11,...| 14.577420000000002|
|[4.0,1.0,0.0,0.0,...|           65.44524|
|(122,[0,1,8,9,11,...|                0.0|
|(122,[0,1,8,9,10,...|           18.27017|
|(122,[0,1,8,11,12...|                0.0|
|(122,[0,1,8,10,11...|           75.75954|
|(122,[0,1,10,11,1...|           65.32013|
|[1.0,0.0,1.0,0.0,...|          171.16563|
|(122,[0,1,8,11,12...|                0.0|
|(122,[0,1,8,9,11,...|                0.0|
|(122,[0,1,8,10,11...|            2.27041|
|(122,[0,1,11,12,2...|                0.0|
|[4.0,1.0,0.0,0.0,...|           76.08024|
|(122,[0,1,8,9,11,...|                0.0|
|(122,[0,1,8,10,11...|           15.31895|
|(122,[0,1,8,10,11...|          122.56702|
|(122,[0,1,8,10,11...|-30.268179999999997|
|(122,[0,1,8,10,11...|                0.0|
|(122,[0,1,10,11,4...|          136.80025|
+--------------------+-------------------+

I am using sparkxgb (XgBoost for PySpark), and I am training the model like this:

paramMap = {'eta': 0.1, 'subsample': 0.8}

xgbClassifier = XGBoostClassifier(**paramMap) \
    .setFeaturesCol("features") \
    .setLabelCol("TARGET_VAL")

When I train the model using:

xgboostModel = xgbClassifier.fit(df)

I get the following error:

java.lang.IllegalArgumentException: requirement failed: Classifier found max label value = 23470.00821 but requires integers in range [0, ... 2147483647)

So I cast the TARGET_VAL column to int, and when I do that I get the following error:

java.lang.IllegalArgumentException: requirement failed: Classifier inferred 23471 from label values in column XGBoostClassifier_37d67e9f2233__labelCol, but this exceeded the max numClasses (100) allowed to be inferred from values.  To avoid this error for labels with > 100 classes, specify numClasses explicitly in the metadata; this can be done by applying StringIndexer to the label column.

I am new to XgBoost and machine learning. My understanding is that TARGET_VAL is the column the trained model should predict for a test dataset, and that it should be a float value. So, what am I doing wrong? Do I need to configure the model with different parameters?

apache-spark machine-learning pyspark data-science xgboost
1 Answer

The problem here is that TARGET_VAL is a continuous-valued column, while XGBoostClassifier requires a discrete/categorical label column. The classifier ends up with far too many classes: as the error message shows, the maximum numClasses it will infer is 100, and your casted labels produce 23,471 distinct values.

You are using a classification algorithm to solve a regression problem.

Continuous vs Discrete Variables - Wiki
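Since this is a regression task, the fix is to switch to a regressor so that continuous labels are valid. Here is a minimal sketch, assuming your build of sparkxgb exposes an XGBoostRegressor with the same builder-style API as the XGBoostClassifier shown above (check the exports of your sparkxgb version):

```python
# Sketch: swap the classifier for a regressor so continuous labels are valid.
# Assumption: sparkxgb provides XGBoostRegressor mirroring XGBoostClassifier's API.
from sparkxgb import XGBoostRegressor

paramMap = {'eta': 0.1, 'subsample': 0.8}

xgbRegressor = XGBoostRegressor(**paramMap) \
    .setFeaturesCol("features") \
    .setLabelCol("TARGET_VAL")  # continuous float labels are fine for a regressor

# df is the Spark DataFrame from the question
xgboostModel = xgbRegressor.fit(df)
predictions = xgboostModel.transform(df)  # adds a "prediction" column
```

With a regressor there is no need to cast TARGET_VAL to int; the continuous float values are exactly what it expects.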
