PySpark: PicklingError on calling df.foreach method

I have code (kafka_producer.py) that reads from a CSV >> creates a Pandas dataframe >> converts the Pandas dataframe to a Spark dataframe >> calls a foreach method on the Spark dataframe to post messages to Kafka. The call df.foreachPartition(self.send_to_kafka) throws PicklingError: Could not serialize object: TypeError: can't pickle _thread.RLock objects.
The code is below:

import json
import pandas as pd
from kafka import KafkaProducer


def get_kafka_producer():
    kafkaBrokers='kafka.broker:9093'
    caRootLocation='/path/to/CARoot.pem'
    certLocation='/path/to/certificate.pem'
    keyLocation='/path/to/key.pem'
    password='abc123'
    
    producer = KafkaProducer(bootstrap_servers=kafkaBrokers,
                              security_protocol='SSL',
                              ssl_check_hostname=False,
                              ssl_cafile=caRootLocation,
                              ssl_certfile=certLocation,
                              ssl_keyfile=keyLocation,
                              ssl_password=password)
    return producer


class SparkKafkaWriter:
    topic = None
    producer = None
    def __init__(self,topic):
        self.topic = topic
    
    def send_to_kafka(self,rows):
        print("Writing Data")
        for row in rows:
            json_str = json.dumps(row)
            self.producer.send(self.topic, key=None, value=bytes(json_str,'utf-8'))
            self.producer.flush()
    
    def post_spark_to_kafka(self,df):
        producer = get_kafka_producer()
        self.producer = producer
        df.foreachPartition(self.send_to_kafka)
        print("Dataframe Posted")

    
def run_kafka_producer(path,sep,topic):
    df = pd.read_csv(path,sep=sep)
    if isinstance(df, pd.DataFrame):
        print("Converting Pandas DF to Spark DF")
        spark = get_spark_session("session_name")
        df = spark.createDataFrame(df)
    
    writer = SparkKafkaWriter(topic)
    writer.post_spark_to_kafka(df)


if __name__ == "__main__":
    path = "/path/to/data.csv"
    sep = "|"
    topic = "TEST_TOPIC"
    run_kafka_producer(path,sep,topic)

The error is:

File "/path/to/kafka_producer.py", line 45, in post_spark_to_kafka
    df.foreachPartition(self.send_to_kafka)
  File "/opt/cloudera/parcels/IMMUTA/python/pyspark/sql/dataframe.py", line 596, in foreachPartition
    self.rdd.foreachPartition(f)
  File "/opt/cloudera/parcels/IMMUTA/python/pyspark/rdd.py", line 806, in foreachPartition
    self.mapPartitions(func).count()  # Force evaluation
  File "/opt/cloudera/parcels/IMMUTA/python/pyspark/rdd.py", line 1055, in count
    return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
  File "/opt/cloudera/parcels/IMMUTA/python/pyspark/rdd.py", line 1046, in sum
    return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
  File "/opt/cloudera/parcels/IMMUTA/python/pyspark/rdd.py", line 917, in fold
    vals = self.mapPartitions(func).collect()
  File "/opt/cloudera/parcels/IMMUTA/python/pyspark/rdd.py", line 816, in collect
    sock_info = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
  File "/opt/cloudera/parcels/IMMUTA/python/pyspark/rdd.py", line 2532, in _jrdd
    self._jrdd_deserializer, profiler)
  File "/opt/cloudera/parcels/IMMUTA/python/pyspark/rdd.py", line 2434, in _wrap_function
    pickled_command, broadcast_vars, env, includes = _prepare_for_python_RDD(sc, command)
  File "/opt/cloudera/parcels/IMMUTA/python/pyspark/rdd.py", line 2420, in _prepare_for_python_RDD
    pickled_command = ser.dumps(command)
  File "/opt/cloudera/parcels/IMMUTA/python/pyspark/serializers.py", line 600, in dumps
    raise pickle.PicklingError(msg)
_pickle.PicklingError: Could not serialize object: TypeError: can't pickle _thread.RLock objects

I don't think you understand what is happening here.

You are creating a Kafka connection on the driver and then trying to ship that live connection from the driver, across your network, to the executors to do the work. (The function you pass to foreachPartition runs on the executors.)

That is what Spark is telling you with "can't pickle _thread.RLock objects": it cannot serialize your live Kafka connection in order to send it to the executors.
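
You can reproduce the same failure outside of Spark. This is a minimal illustration (not from the original post): pickling any object that holds a thread lock, as a live KafkaProducer does internally, fails the same way.

import pickle
import threading

class HoldsConnection:
    """Stand-in for an object holding a live client such as KafkaProducer."""
    def __init__(self):
        self.lock = threading.RLock()   # live connections carry locks like this

# Raises TypeError: cannot pickle '_thread.RLock' object
# (the exact wording varies slightly between Python versions)
pickle.dumps(HoldsConnection())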

You need to call get_kafka_producer() from inside the foreachPartition block. That initializes the connection to Kafka from inside the executor (along with any other bookkeeping you need to do).

FYI: the worst part, which I want to call out, is that this code will run fine on your local machine, because there the same process is both executor and driver. Also, this opens a connection to Kafka from each executor at more or less the same time (5 executors = 5 open connections). In fact it opens one connection per partition (200 by default), so make sure you close them when you are done; see the cleanup sketch after the code below. Something like this:

def send_to_kafka(self,rows):
    print("Writing Data")
    # Create the producer here, on the executor, not on the driver
    producer = get_kafka_producer()
    self.producer = producer
    # do topic configuration
    for row in rows:
        json_str = json.dumps(row)
        self.producer.send(self.topic, key=None, value=bytes(json_str,'utf-8'))
        self.producer.flush()
    # Close the connection once the partition has been written
    self.producer.close()

def post_spark_to_kafka(self,df):
    df.foreachPartition(self.send_to_kafka)
    print("Dataframe Posted")