How to get the probability vector from spark.ml NaiveBayes in Spark, not just the 0/1 class?

Question · Votes: 0 · Answers: 2

I am working on a NaiveBayes classifier. I can predict the value of a single data point with the trained model, but I would also like to get the probability values.

The data falls into only two classes, and the predict function returns

0
1

import org.apache.log4j.{Level, Logger}
import org.apache.spark.ml.classification.{NaiveBayes, NaiveBayesModel}
import org.apache.spark.ml.feature.LabeledPoint
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

object Test {
  def main(args: Array[String]): Unit = {
    Logger.getLogger("org").setLevel(Level.OFF)
    Logger.getLogger("akka").setLevel(Level.OFF)
    val spark = SparkSession.builder.appName("Test").master("local[4]").getOrCreate
    val dataset = spark.read.option("inferSchema", "true").csv("data/labelled.csv").toDF()

    import spark.sqlContext.implicits._
    val output = dataset.map(row => {
      LabeledPoint(row.getInt(2), Vectors.dense( row.getInt(0) , row.getInt(1)))
    })
    val Array(training, test) =  output.randomSplit(Array(0.7, 0.3),seed = 11L)
    training.cache()

    val model : NaiveBayesModel = new NaiveBayes().fit(training)
    val speed = 110
    val hour  = 11
    val label1 : Double =  model.predict(Vectors.dense(speed,hour))
    // UPDATE
    val label = model.predictProbability(Vectors.dense(speed,hour)) // This does not work and raises error [1]
  }
}

[1] The error raised when calling

model.predictProbability

is:

Error:(24, 23) method predictProbability in class ProbabilisticClassificationModel cannot be accessed in org.apache.spark.ml.classification.NaiveBayesModel
Access to protected method predictProbability not permitted because
enclosing object Test is not a subclass of
class ProbabilisticClassificationModel in package classification where target is defined
    val label = model.predictProbability(Vectors.dense(speed, hour))

scala apache-spark machine-learning naivebayes apache-spark-ml
2 Answers
0
votes

After much research, I did not find this functionality in the

spark.ml
library, but I could do it with
spark.mllib
, with the code modified as follows:

import org.apache.log4j.{Level, Logger}
// Import NaiveBayes, NaiveBayesModel from mllib
import org.apache.spark.mllib.classification.{NaiveBayes, NaiveBayesModel}
// Import LabeledPoint and Vectors from mllib to build the dataset
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.sql.SparkSession

object Test {
  def main(args: Array[String]): Unit = {
    Logger.getLogger("org").setLevel(Level.OFF)
    Logger.getLogger("akka").setLevel(Level.OFF)
    val spark = SparkSession.builder.appName("Test").master("local[4]").getOrCreate
    val dataset = spark.read.option("inferSchema","true").csv("data/labelled.csv").toDF()

    import spark.sqlContext.implicits._
    // using mllib.regression.LabeledPoint & mllib.linalg.Vectors then transform DF to JavaRDD
    val output = dataset.map(row => {
      LabeledPoint(row.getInt(2), Vectors.dense( row.getInt(0) , row.getInt(1)))
    }).toJavaRDD
    
    val Array(training, test) =  output.randomSplit(Array(0.7, 0.3),seed = 11L)
    training.cache()
    // Use run instead of fit (the mllib API)
    val model : NaiveBayesModel = new NaiveBayes().run(training)
    val speed = 110
    val hour  = 11
    // returns the predicted class
    val label1 : Double =  model.predict(Vectors.dense(speed,hour))
    // returns a vector of probabilities, one per class
    val testLabel = model.predictProbabilities(Vectors.dense(speed,hour))
  }
}

0
votes

With Spark 3.5.1 (2024), I would turn the

speed
/
hour
pair into a
Dataset
and then do the following:

var testDataset = ... // define the test dataset with the speed and hour
testDataset = model.transform(testDataset)

testDataset
will then contain the following additional columns:

  • rawPrediction
    : an array of raw NaiveBayes scores (one per class)
  • probability
    : an array of probabilities (one per class); they sum to 1
  • prediction
    : the predicted class