Type parameter bounds with Spark objects are hard to get right
I am a beginner in Scala. I am trying to create an object that accepts a ProbabilisticClassifier as input and yields a CrossValidator model as output:
import org.apache.spark.ml.classification.{ProbabilisticClassifier, ProbabilisticClassificationModel}
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}
import constants.Const

object MyModels {
  def loadOrCreateModel[A, M, T](
      model: ProbabilisticClassifier[Vector[T], A, M],
      paramGrid: Array[ParamMap]): CrossValidator = {
    // Binary evaluator.
    val binEvaluator = (
      new BinaryClassificationEvaluator()
        .setLabelCol("yCol")
    )
    // Cross validator.
    val cvModel = (
      new CrossValidator()
        .setEstimator(model)
        .setEvaluator(binEvaluator)
        .setEstimatorParamMaps(paramGrid)
        .setNumFolds(3)
    )
    cvModel
  }
}
But this gives me:
sbt package
[info] Loading project definition from somepath/project
[info] Loading settings from build.sbt ...
[info] Set current project to xxx (in build file:somepath/)
[info] Compiling 1 Scala source to somepath/target/scala-2.11/classes ...
[error] somepath/src/main/scala/models.scala:11:12: type arguments [Vector[T],A,M] do not conform to class ProbabilisticClassifier's type parameter bounds [FeaturesType,E <: org.apache.spark.ml.classification.ProbabilisticClassifier[FeaturesType,E,M],M <: org.apache.spark.ml.classification.ProbabilisticClassificationModel[FeaturesType,M]]
[error] model: ProbabilisticClassifier[Vector[T], A, M],
[error] ^
[error] one error found
[error] (Compile / compileIncremental) Compilation failed
[error] Total time: 3 s, completed Mar 31, 2018 4:22:31 PM
makefile:127: recipe for target 'target/scala-2.11/classes/models/XModels.class' failed
make: *** [target/scala-2.11/classes/models/XModels.class] Error 1
I have tried several combinations of the [A, M, T] parameters, as well as different types in the method arguments. The idea is to be able to feed either a LogisticRegression or a RandomForestClassifier to this function. From the docs:
class LogisticRegression extends ProbabilisticClassifier[Vector, LogisticRegression, LogisticRegressionModel] with LogisticRegressionParams with DefaultParamsWritable with Logging
class RandomForestClassifier extends ProbabilisticClassifier[Vector, RandomForestClassifier, RandomForestClassificationModel] with RandomForestClassifierParams with DefaultParamsWritable
Can someone point me to resources where I could learn what is needed to implement this method?
I am using Spark 2.1.0.
Edit 01
Thank you @Andrey Tyukin, and sorry that the code was not reproducible. It was indeed a string. Your code does work, but maybe I expressed myself poorly:
<console>:35: error: type mismatch;
found : org.apache.spark.ml.classification.LogisticRegression
required: org.apache.spark.ml.classification.ProbabilisticClassifier[Vector[?],?,?]
val cvModel = models.TalkingDataModels.loadOrCreateModel(logistic_regressor, paramGrid)
So maybe my idea was wrong from the start. Is it possible to create a single method that accepts either a LogisticRegression or a RandomForestClassifier object?
Code edited to be an MCVE:
import org.apache.spark.ml.classification.{ProbabilisticClassifier, ProbabilisticClassificationModel}
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}
import org.apache.spark.ml.classification.LogisticRegression

object MyModels {
  def main(array: Array[String]): Unit = {
    val logisticRegressor = (
      new LogisticRegression()
        .setFeaturesCol("yCol")
        .setLabelCol("labels")
        .setMaxIter(10)
    )
    val paramGrid = (
      new ParamGridBuilder()
        .addGrid(logisticRegressor.regParam, Array(0.01, 0.1, 1))
        .build()
    )
    loadOrCreateModel(logisticRegressor, paramGrid)
    println()
  }

  def loadOrCreateModel[
    F,
    M <: ProbabilisticClassificationModel[Vector[F], M],
    P <: ProbabilisticClassifier[Vector[F], P, M]
  ](
    probClassif: ProbabilisticClassifier[Vector[F], P, M],
    paramGrid: Array[ParamMap]
  ): CrossValidator = {
    // Binary evaluator.
    val binEvaluator =
      new BinaryClassificationEvaluator()
        .setLabelCol("y")
    // Cross validator.
    val cvModel =
      new CrossValidator()
        .setEstimator(probClassif)
        .setEvaluator(binEvaluator)
        .setEstimatorParamMaps(paramGrid)
        .setNumFolds(3)
    cvModel
  }
}
This compiles here, but I had to throw away your constants.Const.yColumn string and replace it by the magic value "y":
import org.apache.spark.ml.classification.{ProbabilisticClassifier, ProbabilisticClassificationModel}
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

object CrossValidationExample {
  def loadOrCreateModel[
    F,
    M <: ProbabilisticClassificationModel[Vector[F], M],
    P <: ProbabilisticClassifier[Vector[F], P, M]
  ](
    probClassif: ProbabilisticClassifier[Vector[F], P, M],
    paramGrid: Array[ParamMap]
  ): CrossValidator = {
    // Binary evaluator.
    val binEvaluator =
      new BinaryClassificationEvaluator()
        .setLabelCol("y")
    // Cross validator.
    val cvModel =
      new CrossValidator()
        .setEstimator(probClassif)
        .setEvaluator(binEvaluator)
        .setEstimatorParamMaps(paramGrid)
        .setNumFolds(3)
    cvModel
  }
}
Before defining the list of generic parameters, it can help to perform a topological sort in your head to figure out which parameters depend on which others.
Here, the model depends on the type of the features, and the probabilistic classifier depends both on the type of the features and on the type of the model. So it probably makes more sense to declare the parameters in the order features, model, classifier. Then you have to get the F-bounded polymorphism right.
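The pattern above can be sketched without any Spark dependency. In this toy model (all names here — Model, Classifier, ToyModel, ToyClassifier, train — are hypothetical, not Spark API), the type parameters are declared in exactly that topological order, and each F-bounded parameter refers to itself in its own bound:

```scala
// Toy F-bounded hierarchy mirroring Spark's bounds: the model type M
// appears in its own bound, and the classifier type P refers both to
// itself and to the model type M it produces.
trait Model[F, M <: Model[F, M]] {
  def copyModel(): M
}

trait Classifier[F, P <: Classifier[F, P, M], M <: Model[F, M]] {
  def fit(features: F): M
}

// Concrete pair: the type arguments tie the recursive knot.
final class ToyModel extends Model[Double, ToyModel] {
  def copyModel(): ToyModel = new ToyModel
}

final class ToyClassifier extends Classifier[Double, ToyClassifier, ToyModel] {
  def fit(features: Double): ToyModel = new ToyModel
}

// Generic method with parameters in topological order:
// features, then model, then classifier.
def train[F, M <: Model[F, M], P <: Classifier[F, P, M]](
    classifier: Classifier[F, P, M],
    features: F
): M = classifier.fit(features)

// Inference fills in F = Double, P = ToyClassifier, M = ToyModel.
val trained: ToyModel = train(new ToyClassifier, 1.0)
```

Any other concrete classifier extending the same trait would be accepted by `train` the same way, which is exactly what the Spark version of the signature achieves for LogisticRegression and RandomForestClassifier.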
Ah, and by the way: egyptian-brackets-style indentation is, IMHO, the only sane way to indent multiple parameter lists whose type parameters are about fifty miles long (unfortunately, you cannot do anything about the length of the type parameters; they tend to be quite verbose in every machine learning library I have seen).
Edit (answer to the MCVE part of the second question)
This generalizes in a nice, straightforward way. If it wants a linalg.Vector instead of a Vector[Feature], then just abstract over that too:
import org.apache.spark.ml.classification.{ProbabilisticClassifier, ProbabilisticClassificationModel}
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}
import org.apache.spark.ml.classification.{LogisticRegression, LogisticRegressionModel}
import org.apache.spark.ml.classification.RandomForestClassifier
import org.apache.spark.ml.linalg.{Vector => LinalgVector}

object CrossValidationExample {
  def main(array: Array[String]): Unit = {
    val logisticRegressor = (
      new LogisticRegression()
        .setFeaturesCol("yCol")
        .setLabelCol("labels")
        .setMaxIter(10)
    )
    val paramGrid = (
      new ParamGridBuilder()
        .addGrid(logisticRegressor.regParam, Array(0.01, 0.1, 1))
        .build()
    )
    loadOrCreateModel(logisticRegressor, paramGrid)

    val rfc: RandomForestClassifier = ???
    loadOrCreateModel(rfc, paramGrid)
  }

  def loadOrCreateModel[
    FeatVec,
    M <: ProbabilisticClassificationModel[FeatVec, M],
    P <: ProbabilisticClassifier[FeatVec, P, M]
  ](
    probClassif: ProbabilisticClassifier[FeatVec, P, M],
    paramGrid: Array[ParamMap]
  ): CrossValidator = {
    // Binary evaluator.
    val binEvaluator =
      new BinaryClassificationEvaluator()
        .setLabelCol("y")
    // Cross validator.
    val cvModel =
      new CrossValidator()
        .setEstimator(probClassif)
        .setEvaluator(binEvaluator)
        .setEstimatorParamMaps(paramGrid)
        .setNumFolds(3)
    cvModel
  }
}