Read a text file in PySpark 2
I am trying to read a text file in Spark 2.3 using Python, but I get the error below.
This is the format of the text file:
name marks
amar 100
babul 70
ram 98
krish 45
Code:
df=spark.read.option("header","true")\
.option("delimiter"," ")\
.option("inferSchema","true")\
.schema(
StructType(
[
StructField("Name",StringType()),
StructField("marks",IntegerType())
]
)
)\
.text("file:/home/maria_dev/prac.txt")
Error:
java.lang.AssertionError: assertion failed: Text data source only
produces a single data column named "value"
When I tried reading the text file into an RDD, it was collected into a single column.
Should I change the data file, or change my code?
Instead of .text (which produces only a single "value" column), use .csv to load the file into a DataFrame.
>>> from pyspark.sql.types import *
>>> df=spark.read.option("header","true")\
.option("delimiter"," ")\
.option("inferSchema","true")\
.schema(
StructType(
[
StructField("Name",StringType()),
StructField("marks",IntegerType())
]
)
)\
.csv('file:///home/maria_dev/prac.txt')
>>> df
DataFrame[Name: string, marks: int]
>>> df.show(10,False)
+-----+-----+
|Name |marks|
+-----+-----+
|amar |100 |
|babul|70 |
|ram |98 |
|krish|45 |
+-----+-----+
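As an aside, the transformation the .csv reader performs here (a header row supplying column names, each later space-delimited row becoming a typed record) can be illustrated without Spark using Python's standard csv module. This is only a minimal sketch with an inline copy of the sample data, not how Spark reads the file:

```python
import csv
import io

# Sample data in the same format as prac.txt: space-delimited with a header row.
data = "name marks\namar 100\nbabul 70\nram 98\nkrish 45\n"

# DictReader with delimiter=" " splits each line on spaces and maps the
# header names onto the fields, much like header=true + delimiter=" ".
reader = csv.DictReader(io.StringIO(data), delimiter=" ")

# Rename/cast the fields, analogous to the explicit StructType schema
# (Name as string, marks as integer).
rows = [{"Name": r["name"], "marks": int(r["marks"])} for r in reader]

for r in rows:
    print(r)
```

Note also that when an explicit .schema(...) is supplied, the .option("inferSchema","true") line is redundant; Spark uses the schema you provide instead of inferring one.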