Add column value in the spark dataset on the basis of a condition

public class EmployeeBean implements Serializable {

    private Long id;

    private String name;

    private Long salary;

    private Integer age;

    // getters and setters

}

The relevant Spark code:

SparkSession spark = SparkSession.builder().master("local[2]").appName("play-with-spark").getOrCreate();
List<EmployeeBean> employees1 = populateEmployees(1, 10);

Dataset<EmployeeBean> ds1 = spark.createDataset(employees1, Encoders.bean(EmployeeBean.class));
ds1.show();
ds1.printSchema();

Dataset<Row> ds2 = ds1.where("age is null").withColumn("is_age_null", lit(true));
Dataset<Row> ds3 = ds1.where("age is not null").withColumn("is_age_null", lit(false));

Dataset<Row> ds4 = ds2.union(ds3);
ds4.show();

The relevant output:

ds1

+----+---+----+------+
| age| id|name|salary|
+----+---+----+------+
|null|  1|dev1| 11000|
|   2|  2|dev2| 12000|
|null|  3|dev3| 13000|
|   4|  4|dev4| 14000|
|null|  5|dev5| 15000|
+----+---+----+------+

ds4

+----+---+----+------+-----------+
| age| id|name|salary|is_age_null|
+----+---+----+------+-----------+
|null|  1|dev1| 11000|       true|
|null|  3|dev3| 13000|       true|
|null|  5|dev5| 15000|       true|
|   2|  2|dev2| 12000|      false|
|   4|  4|dev4| 14000|      false|
+----+---+----+------+-----------+

Is there a better way to add this column to the dataset than creating two filtered datasets and performing a union?

The same can be done with when/otherwise inside withColumn:

// === is Scala syntax and "null" is a string literal; in Java, test for
// null with Column.isNull(), and note the method is otherwise, not otherWise.
ds1.withColumn("is_age_null", when(col("age").isNull(), lit(true)).otherwise(lit(false))).show();

This gives the same result as ds4 (without the row reordering caused by the union).
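Since isNull() already returns a boolean Column, the when/otherwise wrapper can even be dropped. A minimal sketch, assuming the same SparkSession and ds1 as above:

```java
import static org.apache.spark.sql.functions.col;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

// col("age").isNull() yields a boolean Column directly, so it can serve
// as the new column's value without an explicit when/otherwise.
Dataset<Row> ds5 = ds1.withColumn("is_age_null", col("age").isNull());
ds5.show();
```

This produces the same true/false column and keeps the original row order, since no filtering or union is involved.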