Getting Pyspark error of 'split' is not in list while calling split() function
I have created a DataFrame as follows:
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("test").getOrCreate()
categories=spark.read.text("resources/textFile/categories")
categories.show(n=2)
+------------+
| value|
+------------+
|1,2,Football|
| 2,2,Soccer|
+------------+
only showing top 2 rows
Now, when I convert this DataFrame to an RDD and try to split each row of the RDD on ',' (comma):
crdd = categories.rdd.map(lambda line: line.split(',')[1])
crdd.foreach(lambda lin: print(lin))
While fetching the element at position 1 into the crdd RDD, I get the following error:
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 13.0 failed 1 times, most recent failure: Lost task 0.0 in stage 13.0 (TID 13, localhost, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "C:\Users\Downloads\bigdataSetup\spark-2.2.1-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\sql\types.py", line 1504, in __getattr__
idx = self.__fields__.index(item)
ValueError: 'split' is not in list
Note: the data here is in CSV format only to make it easy to reproduce.
Since your data is in CSV format, you can use the read.csv API:
categories=spark.read.csv("resources/textFile/categories")
Alternatively, keep read.text and modify your code as follows:
crdd = categories.rdd.map(lambda line: line.value.split(',')[1])
for i in crdd.take(10):
    print(i)