JavaPackage object is not callable error: Pyspark
Operations like dataframe.show() and sqlContext.read.json work fine, but most functions give a "'JavaPackage' object is not callable" error.
For example, when I do
dataFrame.withColumn(field_name, monotonically_increasing_id())
I get this error:
File "/tmp/spark-cd423f35-9572-45ee-b159-1b2732afa2a6/userFiles-3a6e1729-95f4-468b-914c-c706369bf2a6/Transformations.py", line 64, in add_id_column
self.dataFrame = self.dataFrame.withColumn(field_name, monotonically_increasing_id())
File "/home/himaprasoon/apps/spark-1.6.0-bin-hadoop2.6/python/pyspark/sql/functions.py", line 347, in monotonically_increasing_id
return Column(sc._jvm.functions.monotonically_increasing_id())
TypeError: 'JavaPackage' object is not callable
I am using the apache-zeppelin interpreter and have added py4j to the Python path.
When I do
import py4j
print(dir(py4j))
the import succeeds and prints:
['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'compat', 'finalizer', 'java_collections', 'java_gateway', 'protocol', 'version']
When I try
print(sc._jvm.functions)
in the pyspark shell, it prints
<py4j.java_gateway.JavaClass object at 0x7fdaf9727ba8>
but when I try the same thing in my interpreter, it prints
<py4j.java_gateway.JavaPackage object at 0x7f07cc3f77f0>
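This difference is the whole symptom: py4j resolves sc._jvm names lazily, and an unknown name falls back to a JavaPackage placeholder, which is not callable. A minimal simulated sketch of that distinction (JavaPackage and JavaClass below are stand-in classes for illustration, not the real py4j ones):

```python
class JavaPackage:
    """Stand-in for py4j.java_gateway.JavaPackage: the placeholder py4j
    returns when a JVM name was not imported; calling it raises TypeError."""

class JavaClass:
    """Stand-in for py4j.java_gateway.JavaClass: the wrapper py4j returns
    when the JVM class is actually visible on the gateway."""

def is_usable(jvm_obj):
    # A resolved name is only callable if it is a real class wrapper,
    # not a package placeholder.
    return type(jvm_obj).__name__ == "JavaClass"

print(is_usable(JavaClass()))    # a properly imported class works
print(is_usable(JavaPackage()))  # a package stub triggers the error above
```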
In the Zeppelin interpreter code,
java_import(gateway.jvm, "org.apache.spark.sql.*")
was not being executed. Adding it to the imports solved the problem.
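For reference, the fix can be sketched as the following setup fragment (it assumes a live SparkContext sc with its py4j gateway, so it only runs inside a Spark session; sc._gateway is the gateway PySpark itself creates):

```python
from py4j.java_gateway import java_import

# Make org.apache.spark.sql.* visible on the gateway's JVM view, so that
# sc._jvm.functions resolves to a JavaClass instead of a JavaPackage stub.
java_import(sc._gateway.jvm, "org.apache.spark.sql.*")

# sc._jvm.functions should now print as a py4j JavaClass, and functions such
# as monotonically_increasing_id() become callable again.
print(sc._jvm.functions)
```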