Spark mllib threshold for f1score
I am trying to find the threshold that gives my logistic regression the best F1 score. However, when I run the following lines:
val f1Score = metrics.fMeasureByThreshold
f1Score.foreach { case (t, f) =>
  println(s"Threshold: $t, F-score: $f, Beta = 1")
}
some strange values show up, for example:
Threshold: 2.0939996826644833, F-score: 0.285648784961027, Beta = 1
Threshold: 2.093727854652065, F-score: 0.28604171441668574, Beta = 1
Threshold: 2.0904571465313113, F-score: 0.2864344637946838, Beta = 1
Threshold: 2.0884466833553468, F-score: 0.28682703321878583, Beta = 1
Threshold: 2.0882666552407283, F-score: 0.2872194228126431, Beta = 1
Threshold: 2.0835997800203447, F-score: 0.2876116326997939, Beta = 1
Threshold: 2.077892816382506, F-score: 0.28800366300366304, Beta = 1
How can a threshold be greater than one? The same question applies to the negative values that appear further down in the console output.
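For context, metrics in the snippet above is presumably a BinaryClassificationMetrics built from an RDD of (score, label) pairs, along these lines (a minimal sketch on my part, assuming the predictionAndLabels RDD constructed in the answer below; the maxBy step is just one way to pull out the best threshold):

import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics

// Each pair is (score for the positive class, true label).
val metrics = new BinaryClassificationMetrics(predictionAndLabels)

// fMeasureByThreshold returns an RDD of (threshold, F1) pairs;
// collecting and taking the max picks the threshold with the best F1.
val (bestThreshold, bestF1) = metrics.fMeasureByThreshold.collect().maxBy(_._2)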
It turns out I had made a mistake earlier when converting my DataFrame to an RDD. Instead of writing:
import org.apache.spark.ml.linalg.DenseVector

val predictionAndLabels = predictions.select("probability", "labelIndex").rdd
  .map(x => (x(0).asInstanceOf[DenseVector](1), x(1).asInstanceOf[Double]))
I had written:
val predictionAndLabels = predictions.select("rawPredictions", "labelIndex").rdd
  .map(x => (x(0).asInstanceOf[DenseVector](1), x(1).asInstanceOf[Double]))
So the thresholds applied to the raw predictions rather than to probabilities, and now everything makes sense.
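This also explains the out-of-range values: for binary logistic regression the raw prediction is the log-odds margin m = w·x + b, which can be any real number, while the probability column is its sigmoid and therefore stays in (0, 1). A quick sketch of the relationship (the margin value below is just the first "threshold" from the console output above):

// rawPrediction(1) is the margin m; probability(1) is sigmoid(m).
def sigmoid(m: Double): Double = 1.0 / (1.0 + math.exp(-m))

val margin = 2.0939996826644833   // first "threshold" printed above
println(sigmoid(margin))          // ~0.89, the probability this margin maps to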