Spark Structured streaming: primary key in JDBC sink
I am reading a stream from a Kafka topic using Structured Streaming in update mode and then applying some transformations.
I then created a JDBC sink to push the data into MySQL in Append mode. The problem is: how do I tell the sink which column is my primary key and have it update rows based on that key, so my table never ends up with duplicate rows?
val df: DataFrame = spark
.readStream
.format("kafka")
.option("kafka.bootstrap.servers", "<List-here>")
.option("subscribe", "emp-topic")
.load()
import spark.implicits._
// value in kafka is bytes so cast it to String
val empList: Dataset[Employee] = df
  .selectExpr("CAST(value AS STRING)")
  .map(row => Employee(row.getString(0)))
// window aggregations on 1 min windows
val aggregatedDf= ......
// How to tell here that id is my primary key and do the update
// based on id column
aggregatedDf
.writeStream
.trigger(Trigger.ProcessingTime(60.seconds))
.outputMode(OutputMode.Update)
.foreachBatch { (batchDF: DataFrame, batchId: Long) =>
batchDF
.select("id", "name","salary","dept")
.write.format("jdbc")
.option("url", "jdbc:mysql://localhost/empDb")
.option("driver","com.mysql.cj.jdbc.Driver")
.option("dbtable", "empDf")
.option("user", "root")
.option("password", "root")
.mode(SaveMode.Append)
.save()
}
.start()
One way to achieve this is to use ON DUPLICATE KEY UPDATE together with foreachPartition.
Below is a pseudo-code snippet:
import java.sql.{Connection, DriverManager}

import org.apache.spark.sql.{DataFrame, Row}

/**
 * Insert into the database using foreachPartition.
 * @param dataframe                   DataFrame to write
 * @param sqlDatabaseConnectionString JDBC connection string (may embed user/password)
 * @param sqlTableName                target table name
 */
def insertToTable(dataframe: DataFrame, sqlDatabaseConnectionString: String, sqlTableName: String): Unit = {
  // numPartitions = number of simultaneous DB connections you are planning to allow
  val numPartitions = 8 // tune this to your database
  val repartitionedDf = dataframe.repartition(numPartitions)

  // Build the column list and the upsert clause on the driver, so that only
  // plain strings are captured by the executor closure below.
  val columns = repartitionedDf.columns
  val tableHeader: String = columns.mkString(",")
  // On a duplicate primary key, overwrite every column with the incoming value
  val updateClause: String = columns.map(c => s"$c=VALUES($c)").mkString(",")

  repartitionedDf.foreachPartition { partition: Iterator[Row] =>
    // Note: one connection per partition (a connection pool would be even better)
    val sqlExecutorConnection: Connection = DriverManager.getConnection(sqlDatabaseConnectionString)
    try {
      // Batch size of 1000, since some databases (e.g. Azure SQL) cannot take larger batches
      partition.grouped(1000).foreach { group =>
        // Naive quoting is used here; see the prepared-statement note below
        val valuesClause = group
          .map(record => record.toSeq.map(v => s"'$v'").mkString("(", ",", ")"))
          .mkString(",")
        val sql =
          s"""
             |INSERT INTO $sqlTableName ($tableHeader) VALUES
             |$valuesClause
             |ON DUPLICATE KEY UPDATE
             |$updateClause
           """.stripMargin
        sqlExecutorConnection.createStatement().executeUpdate(sql)
      }
    } finally {
      sqlExecutorConnection.close() // close the connection
    }
  }
}
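To hook this into the streaming query from the question, one option (shown here only as a sketch that reuses the question's trigger, columns, and connection details; the credentials are the question's own placeholders) is to call insertToTable from foreachBatch:

import scala.concurrent.duration._

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.streaming.{OutputMode, Trigger}

aggregatedDf
  .writeStream
  .trigger(Trigger.ProcessingTime(60.seconds))
  .outputMode(OutputMode.Update)
  .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
    // Upsert each micro-batch instead of plain appending
    insertToTable(
      batchDF.select("id", "name", "salary", "dept"),
      "jdbc:mysql://localhost/empDb?user=root&password=root",
      "empDf")
  }
  .start()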
You can use a prepared statement instead of building the JDBC statement by hand.
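As an illustration only (not part of the original answer), a PreparedStatement variant of the same upsert could look roughly like the sketch below; it assumes the question's empDf table with columns id, name, salary and dept (the column types are guesses) and id as the primary key:

import java.sql.DriverManager

import org.apache.spark.sql.{DataFrame, Row}

// Sketch: upsert with a PreparedStatement so the JDBC driver handles escaping.
// Table and column names come from the question; the column types are assumptions.
def upsertWithPreparedStatement(dataframe: DataFrame, url: String): Unit = {
  dataframe.foreachPartition { partition: Iterator[Row] =>
    val conn = DriverManager.getConnection(url)
    val stmt = conn.prepareStatement(
      """INSERT INTO empDf (id, name, salary, dept) VALUES (?, ?, ?, ?)
        |ON DUPLICATE KEY UPDATE name=VALUES(name), salary=VALUES(salary), dept=VALUES(dept)""".stripMargin)
    try {
      partition.grouped(1000).foreach { group =>
        group.foreach { row =>
          stmt.setString(1, row.getAs[String]("id"))     // assumed String key
          stmt.setString(2, row.getAs[String]("name"))
          stmt.setDouble(3, row.getAs[Double]("salary")) // assumed numeric salary
          stmt.setString(4, row.getAs[String]("dept"))
          stmt.addBatch()
        }
        stmt.executeBatch() // up to 1000 upserts per round trip
      }
    } finally {
      stmt.close()
      conn.close()
    }
  }
}

Because the parameters are bound by the driver, values containing quotes or NULLs no longer break the generated SQL.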
Further reading: