
Spark column string replace when present in other column (row)

I want to remove from col1 the strings that are present in col2:

val df = spark.createDataFrame(Seq(
  ("Hi I heard about Spark", "Spark"),
  ("I wish Java could use case classes", "Java"),
  ("Logistic regression models are neat", "models")
)).toDF("sentence", "label")

Using regexp_replace or translate (reference: Spark functions API):

val res = df.withColumn("sentence_without_label",
  regexp_replace(col("sentence"), "(?????)", ""))

so that res looks as shown below.

You can simply use regexp_replace:

df5.withColumn("sentence_without_label", regexp_replace($"sentence", $"label", lit("")))
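Note that this relies on the overload of regexp_replace that takes the pattern and replacement as Columns rather than Strings (added in Spark 2.1, if I recall correctly). A minimal end-to-end sketch of the same call on the question's df:

import org.apache.spark.sql.functions.{lit, regexp_replace}
import spark.implicits._ // for the $"..." column syntax

// Remove each row's label from its sentence; show(false) avoids truncating long sentences.
df.withColumn("sentence_without_label",
    regexp_replace($"sentence", $"label", lit("")))
  .show(false)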

Or you can use a simple udf function, as below:

val df5 = spark.createDataFrame(Seq(
  ("Hi I heard about Spark", "Spark"),
  ("I wish Java could use case classes", "Java"),
  ("Logistic regression models are neat", "models")
)).toDF("sentence", "label")

import org.apache.spark.sql.functions.udf

// udf that removes every occurrence of rep (interpreted as a regex) from data
val replace = udf((data: String, rep: String) => data.replaceAll(rep, ""))

val res = df5.withColumn("sentence_without_label", replace($"sentence" , $"label"))

res.show(false) // false => do not truncate the output columns

Output:

+-----------------------------------+------+------------------------------+
|sentence                           |label |sentence_without_label        |
+-----------------------------------+------+------------------------------+
|Hi I heard about Spark             |Spark |Hi I heard about              |
|I wish Java could use case classes |Java  |I wish  could use case classes|
|Logistic regression models are neat|models|Logistic regression  are neat |
+-----------------------------------+------+------------------------------+
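One caveat worth noting: both replaceAll and regexp_replace interpret the label as a regular expression, so a label containing metacharacters such as "C++" or "a.b" would not be matched literally. A minimal sketch that quotes the label first (replaceLiteral is a name made up for illustration):

import java.util.regex.Pattern
import org.apache.spark.sql.functions.udf

// Quote the label so regex metacharacters are matched literally.
// replaceLiteral is a hypothetical helper, not from the original answer.
val replaceLiteral = udf((data: String, rep: String) =>
  data.replaceAll(Pattern.quote(rep), ""))

df5.withColumn("sentence_without_label",
  replaceLiteral($"sentence", $"label")).show(false)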

It would be trivial if label were just a literal; since it comes from another column, pass it to regexp_replace as a Column:

import org.apache.spark.sql.functions._

df.withColumn("sentence_without_label", 
  regexp_replace(col("sentence"), col("label"), lit(""))).show(false)

+-----------------------------------+------+------------------------------+
|sentence                           |label |sentence_without_label        |
+-----------------------------------+------+------------------------------+
|Hi I heard about Spark             |Spark |Hi I heard about              |
|I wish Java could use case classes |Java  |I wish  could use case classes|
|Logistic regression models are neat|models|Logistic regression  are neat |
+-----------------------------------+------+------------------------------+  
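For comparison, when the text to strip really is the same literal for every row, the plain String-pattern overload of regexp_replace is enough; a minimal sketch, assuming you only want to drop the word "Spark" (the output column name is my own):

import org.apache.spark.sql.functions._

// Constant literal pattern: the same word is removed from every row.
df.withColumn("sentence_without_spark",
  regexp_replace(col("sentence"), "Spark", "")).show(false)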

In Spark 1.6, you can do the same with expr:
df.withColumn(
  "sentence_without_label",
  expr("regexp_replace(sentence, label, '')"))