The i/p col features must be either string or numeric type, but got org.apache.spark.ml.linalg.VectorUDT

I'm new to Spark machine learning, only about three days in, and I'm essentially trying to predict some data using the logistic regression algorithm in Spark via Java. I pieced the code together from several websites and the documentation, but when I try to execute it I run into a problem. I preprocessed the data and used a VectorAssembler to merge all the relevant columns into one; the issue occurs when I try to fit the model.

public class Sparkdemo {

static SparkSession session = SparkSession.builder().appName("spark_demo")
        .master("local[*]").getOrCreate();

@SuppressWarnings("empty-statement")
public static void getData() {
    Dataset<Row> inputFile = session.read()
            .option("header", true)
            .format("csv")
            .option("inferschema", true)
            .csv("C:\Users\WildJasmine\Downloads\NKI_cleaned.csv");
    inputFile.show();
    String[] columns = inputFile.columns();
    int beg = 16, end = columns.length - 1;
    String[] featuresToDrop = new String[end - beg + 1];
    System.arraycopy(columns, beg, featuresToDrop, 0, featuresToDrop.length);
    System.out.println("rows are\n " + Arrays.toString(featuresToDrop));
    Dataset<Row> dataSubset = inputFile.drop(featuresToDrop);
    String[] arr = {"Patient", "ID", "eventdeath"};
    Dataset<Row> X = dataSubset.drop(arr);
    X.show();
    Dataset<Row> y = dataSubset.select("eventdeath");
    y.show();

    //Vector Assembler concept for merging all the cols into a single col
    VectorAssembler assembler = new VectorAssembler()
            .setInputCols(X.columns())
            .setOutputCol("features");

    Dataset<Row> dataset = assembler.transform(X);
    dataset.show();

    StringIndexer labelSplit = new StringIndexer().setInputCol("features").setOutputCol("label");
    Dataset<Row> data = labelSplit.fit(dataset)
            .transform(dataset);
    data.show();

    Dataset<Row>[] splitsX = data.randomSplit(new double[]{0.8, 0.2}, 42);
    Dataset<Row> trainingX = splitsX[0];
    Dataset<Row> testX = splitsX[1];

    LogisticRegression lr = new LogisticRegression()
            .setMaxIter(10)
            .setRegParam(0.3)
            .setElasticNetParam(0.8);

    LogisticRegressionModel lrModel = lr.fit(trainingX);
    Dataset<Row> prediction = lrModel.transform(testX);
    prediction.show();

}

public static void main(String[] args) {
    getData();

}}

Below is an image of my dataset:

dataset

Error message:

Exception in thread "main" java.lang.IllegalArgumentException: requirement failed: The input column features must be either string type or numeric type, but got org.apache.spark.ml.linalg.VectorUDT@3bfc3ba7.
at scala.Predef$.require(Predef.scala:224)
at org.apache.spark.ml.feature.StringIndexerBase$class.validateAndTransformSchema(StringIndexer.scala:86)
at org.apache.spark.ml.feature.StringIndexer.validateAndTransformSchema(StringIndexer.scala:109)
at org.apache.spark.ml.feature.StringIndexer.transformSchema(StringIndexer.scala:152)
at org.apache.spark.ml.PipelineStage.transformSchema(Pipeline.scala:74)
at org.apache.spark.ml.feature.StringIndexer.fit(StringIndexer.scala:135)

My end goal is to get the predicted values using the feature columns.

Thanks in advance.

That error occurs when the input column of the dataframe you apply the StringIndexer to is a vector. In the Spark documentation https://spark.apache.org/docs/latest/ml-features#stringindexer you can see that the input column must be a string (or numeric) column. The transformer computes the distinct values of that column and creates a new column containing an integer index corresponding to each distinct value. It does not work on vectors.
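In your code, the StringIndexer is pointed at the assembled vector column `"features"`, but what you actually want to index is the label column `"eventdeath"`, which you dropped before assembling. A minimal sketch of the fix, based on your own variable names (assuming `eventdeath` is the binary label you want to predict): keep `eventdeath` in the frame you assemble on, exclude it only from the assembler's input columns, and run the StringIndexer on it instead of on the vector.

```java
// Feature columns: everything except the identifiers and the label.
String[] featureCols = dataSubset.drop("Patient", "ID", "eventdeath").columns();

VectorAssembler assembler = new VectorAssembler()
        .setInputCols(featureCols)
        .setOutputCol("features");

// Assemble on a frame that still contains "eventdeath",
// so the label survives into the assembled dataset.
Dataset<Row> assembled = assembler.transform(dataSubset.drop("Patient", "ID"));

// Index the label column (string/numeric), not the vector column.
StringIndexer labelIndexer = new StringIndexer()
        .setInputCol("eventdeath")
        .setOutputCol("label");

Dataset<Row> data = labelIndexer.fit(assembled).transform(assembled);
```

With `features` and `label` columns both present, the rest of your code (randomSplit, `lr.fit`, `lrModel.transform`) should work unchanged, since LogisticRegression reads those two columns by default.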