withColumn with UDF yields AttributeError: 'NoneType' object has no attribute '_jvm'

I am trying to use a UDF to replace some values in a Spark DataFrame, but I keep getting the same error.

While debugging I found that it doesn't really depend on the DataFrame I use, nor on the function I write. Here is an MWE with a simple lambda function that I can't get to run correctly. It should simply modify every value in the first column by concatenating the value with itself.

from pyspark.sql.functions import udf, lit
from pyspark.sql.types import StringType

l = [('Alice', 1)]
df = sqlContext.createDataFrame(l)
df.show()

#+-----+---+
#|   _1| _2|
#+-----+---+
#|Alice|  1|
#+-----+---+

df = df.withColumn("_1", udf(lambda x : lit(x+x), StringType())(df["_1"]))
df.show()
#Alice should now become AliceAlice

Here is the error I get, with the rather cryptic "AttributeError: 'NoneType' object has no attribute '_jvm'".

 File "/cdh/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/spark/python/pyspark/worker.py", line 111, in main
    process()
  File "/cdh/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/spark/python/pyspark/worker.py", line 106, in process
    serializer.dump_stream(func(split_index, iterator), outfile)
  File "/cdh/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/spark/python/pyspark/serializers.py", line 263, in dump_stream
    vs = list(itertools.islice(iterator, batch))
  File "/cdh/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/spark/python/pyspark/sql/functions.py", line 1566, in <lambda>
    func = lambda _, it: map(lambda x: returnType.toInternal(f(*x)), it)
  File "<stdin>", line 1, in <lambda>
  File "/cdh/opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/spark/python/pyspark/sql/functions.py", line 39, in _
    jc = getattr(sc._jvm.functions, name)(col._jc if isinstance(col, Column) else col)
AttributeError: 'NoneType' object has no attribute '_jvm'

I'm sure I'm getting the syntax confused and can't get the types right (thanks, duck typing!), but every example of withColumn with a lambda function that I've found looks similar to this one.

You're close, it is complaining because you can't use lit inside a udf :) lit works at the column level, not the row level: it calls into the JVM via sc._jvm to build a Column expression, but the body of a udf runs on the executor workers, where there is no SparkContext and sc is None. That is exactly the "'NoneType' object has no attribute '_jvm'" you see in the traceback (pyspark/sql/functions.py, line 39).

from pyspark.sql.functions import udf, lit
from pyspark.sql.types import StringType

l = [('Alice', 1)]
df = spark.createDataFrame(l)
df.show()

+-----+---+
|   _1| _2|
+-----+---+
|Alice|  1|
+-----+---+

df = df.withColumn("_1", udf(lambda x: x+x, StringType())("_1"))
# this would produce the same result, but lit is not necessary here
# df = df.withColumn("_1", udf(lambda x: x+x, StringType())(lit(df["_1"])))
df.show()

+----------+---+
|        _1| _2|
+----------+---+
|AliceAlice|  1|
+----------+---+