Passing spark dataframe columns to geohash function - pyspark. Cannot convert column into bool:

import pygeohash as pgh

pgh.encode(45,55)

'tpzpgxczbzur'

The above works fine on plain numbers. Next I try to build a dataframe:

from pyspark.sql import Row

l = [(45, 25), (75, 22), (85, 20), (89, 26)]

rdd = sc.parallelize(l)
geoCords = rdd.map(lambda x: Row(lat=x[0], long=int(x[1])))
geoCordsSchema = sqlContext.createDataFrame(geoCords)
geoCordsSchema.show()

+---+----+
|lat|long|
+---+----+
| 45|  25|
| 75|  22|
| 85|  20|
| 89|  26|
+---+----+
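
As an aside, on Spark 2.0+ the same dataframe can be built without the intermediate RDD; a minimal sketch, assuming a SparkSession named spark is available:

l = [(45, 25), (75, 22), (85, 20), (89, 26)]
geoCordsSchema = spark.createDataFrame(l, ['lat', 'long'])  # column names given directly, types inferred
geoCordsSchema.show()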

This successfully creates a Spark dataframe. Now, when I try to encode with pygeohash, it throws the following error:

pgh.encode(geoCordsSchema.lat, geoCordsSchema.long, precision = 7)

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Python/2.7/site-packages/pygeohash/geohash.py", line 96, in encode
    if longitude > mid:
  File "/usr/local/spark/python/pyspark/sql/column.py", line 427, in __nonzero__
    raise ValueError("Cannot convert column into bool: please use '&' for 'and', '|' for 'or', "
ValueError: Cannot convert column into bool: please use '&' for 'and', '|' for 'or', '~' for 'not' when building DataFrame boolean expressions.

You can't pass Spark Columns straight into an ordinary Python function like this. pygeohash's encode runs plain comparisons such as if longitude > mid:, but comparing a Column yields another Column expression, and Spark refuses to coerce that into a bool, which is exactly what the traceback shows.
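
A quick way to see this for yourself:

expr = geoCordsSchema.long > 90   # a Column expression, not True/False
bool(expr)                        # raises the same ValueError as above

To apply pgh.encode row by row instead, wrap it in a UDF: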

from pyspark.sql import functions as F  # note: 'functions', not 'function'
udf1 = F.udf(lambda x, y: pgh.encode(x, y, precision=7))  # default return type is StringType
geoCordsSchema.select('lat', 'long', udf1('lat', 'long').alias('encodedVal')).show()
+---+----+----------+
|lat|long|encodedVal|
+---+----+----------+
| 45|  25|   sxczbzu|
| 75|  22|   umrdst7|
| 85|  20|   urn5x1g|
| 89|  26|   uxf6r9u|
+---+----+----------+
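
If you are on Spark 3.0+ with pyarrow installed, a pandas_udf usually outperforms a plain Python UDF because rows are shipped to Python in Arrow batches; a minimal sketch under those assumptions:

import pandas as pd
import pygeohash as pgh
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

@F.pandas_udf(StringType())
def geohash_udf(lat: pd.Series, lon: pd.Series) -> pd.Series:
    # pygeohash encodes one point at a time, so loop within each Arrow batch
    return pd.Series([pgh.encode(a, b, precision=7) for a, b in zip(lat, lon)])

geoCordsSchema.select('lat', 'long', geohash_udf('lat', 'long').alias('encodedVal')).show()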