How to combine two columns of a Dataset in Spark

I have a Spark Dataset like this:

> df.show()
+------+------+
| No1  | No2  |
+------+------+
| 001  | null |
| 002  | 002  |
| 003  | 004  |
| null | 005  |
+------+------+

I would like to get a new column No3 that contains values taken from columns No1 and No2: copy the value of No1 if it is present, otherwise (when No1 is null) use the value from No2:

+------+------+------+
| No1  | No2  | No3  |
+------+------+------+
| 001  | null | 001  |
| 002  | 002  | 002  |
| 003  | 004  | 003  |
| null | 005  | 005  |
+------+------+------+

How can I do this?

You can check whether column No1 is null; if it is null, take the value from No2 instead:

import org.apache.spark.sql.functions._
import spark.implicits._  // needed for toDF and the $"..." column syntax (pre-imported in spark-shell)

val data = spark.sparkContext.parallelize(Seq(
  ("001", null),
  ("002", "002"),
  ("003", "004"),
  (null, "005")
)).toDF("No1", "No2")

// If No1 is null, fall back to No2; otherwise keep No1.
val resultDf = data.withColumn("No3", when($"No1".isNull, $"No2").otherwise($"No1"))

resultDf.show()

Output:

+----+----+---+
|No1 |No2 |No3|
+----+----+---+
|001 |null|001|
|002 |002 |002|
|003 |004 |003|
|null|005 |005|
+----+----+---+
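
The same null check can also be written as a SQL expression with expr. This is just an alternative sketch of the logic above, not part of the original answer; the val name resultViaExpr is only for illustration.

// CASE expression equivalent of the when/otherwise call above
val resultViaExpr = data.withColumn("No3",
  expr("CASE WHEN No1 IS NULL THEN No2 ELSE No1 END"))

resultViaExpr.show()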

Hope this helps!

I think what you are looking for is coalesce:

import org.apache.spark.sql.functions._
import spark.implicits._  // needed for toDF and the $"..." column syntax (pre-imported in spark-shell)

val data = spark.sparkContext.parallelize(Seq(
  ("001", null),
  ("002", "002"),
  ("003", "004"),
  (null, "005")
)).toDF("No1", "No2")

// coalesce returns the first non-null value among its arguments,
// so No3 is No1 when it is present and No2 otherwise.
val resultDf = data.withColumn("No3", coalesce($"No1", $"No2"))

resultDf.show()
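
This produces the same output as shown above. If you prefer the SQL flavour, coalesce is also available as a SQL function; here is a minimal sketch using selectExpr (not part of the original answer, and resultViaSql is just an illustrative name):

// coalesce used inside a SQL expression via selectExpr
val resultViaSql = data.selectExpr("No1", "No2", "coalesce(No1, No2) AS No3")

resultViaSql.show()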