Spark Delta merge add Source column value to Target column value
I want the updated value in the target column to be the sum of the source value and the target value.
Example:
%scala
import org.apache.spark.sql.functions._
import io.delta.tables._
import spark.implicits._ // needed for toDF on a local Seq outside Databricks notebooks
// Create an example Delta table
val dept = Seq(("Finance", 10), ("Marketing", 20), ("Sales", 30), ("IT", 40))
val deptColumns = Seq("dept_name", "dept_emp_count")
val deptDF = dept.toDF(deptColumns: _*)
deptDF.write.format("delta").mode("overwrite").saveAsTable("dept_table")
// Create an example staged DataFrame: same rows, but dept_emp_count forced to 1
val staged_df = spark.sql("select * from dept_table").withColumn("dept_emp_count", lit(1))
// How to do this merge?
DeltaTable.forName(spark, "dept_table").as("events")
  .merge(staged_df.as("updates"), "events.dept_name = updates.dept_name")
  .whenMatched()
  .updateExpr(Map(
    "dept_emp_count" -> lit("events.dept_emp_count") + lit("updates.dept_emp_count"))) // How do I write this line?
  .execute()
The values in that update map are SQL expression strings, so you just need to write "events.dept_emp_count + updates.dept_emp_count"
instead of lit("events.dept_emp_count") + lit("updates.dept_emp_count").
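Putting that into the question's merge, the corrected whenMatched clause looks like this (a minimal sketch reusing dept_table, staged_df, and the aliases from the example above):
%scala
// Add the staged (source) count to the existing (target) count on every matching row.
DeltaTable.forName(spark, "dept_table").as("events")
  .merge(staged_df.as("updates"), "events.dept_name = updates.dept_name")
  .whenMatched()
  .updateExpr(Map(
    "dept_emp_count" -> "events.dept_emp_count + updates.dept_emp_count")) // plain SQL string, no lit()
  .execute()
// Sanity check: every dept_emp_count should now be its original value + 1.
spark.sql("select * from dept_table").show()
If you would rather avoid SQL strings, Delta also offers a typed counterpart, update(Map[String, Column]), where the same entry becomes "dept_emp_count" -> (col("events.dept_emp_count") + col("updates.dept_emp_count")).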