Create Spark DataFrame. Can not infer schema for type: <type 'float'>
Could someone help me solve this problem I'm having with Spark DataFrame?
When I run myFloatRDD.toDF() I get the error:
TypeError: Can not infer schema for type: <type 'float'>
I don't understand why...
Example:
myFloatRdd = sc.parallelize([1.0, 2.0, 3.0])
df = myFloatRdd.toDF()
Thanks
SparkSession.createDataFrame, which is used under the hood, requires an RDD or list of Row/tuple/list/dict* or pandas.DataFrame, unless a schema with a DataType is provided. Try converting each float to a tuple like this:
myFloatRdd.map(lambda x: (x, )).toDF()
or, even better:
from pyspark.sql import Row
row = Row("val") # Or some other column name
myFloatRdd.map(row).toDF()
To create a DataFrame from a list of scalars, you'll have to use SparkSession.createDataFrame directly and provide a schema***:
from pyspark.sql.types import FloatType
df = spark.createDataFrame([1.0, 2.0, 3.0], FloatType())
df.show()
## +-----+
## |value|
## +-----+
## | 1.0|
## | 2.0|
## | 3.0|
## +-----+
but for a simple range, it is better to use SparkSession.range:
from pyspark.sql.functions import col
spark.range(1, 4).select(col("id").cast("double"))
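For reference (my addition, not part of the original answer), cast keeps the column name, so the result matches the FloatType example above:
spark.range(1, 4).select(col("id").cast("double")).show()
## +---+
## | id|
## +---+
## |1.0|
## |2.0|
## |3.0|
## +---+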
* No longer supported.
** Spark SQL also provides limited support for schema inference on Python objects exposing __dict__.
*** Only supported in Spark 2.0 or later.
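As a side note on **, here is a minimal sketch of that limited __dict__-based inference; the Point class is a hypothetical example, and the exact behavior depends on your Spark version:
# Hypothetical class; Spark can inspect obj.__dict__ to infer
# column names (x, y) and types (support is limited).
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

df = spark.createDataFrame([Point(1.0, 2.0), Point(3.0, 4.0)])
df.show()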
Inferring the schema using reflection
from pyspark.sql import Row
# spark - sparkSession
sc = spark.sparkContext
# Load a text file and convert each line to a Row.
orders = sc.textFile("/practicedata/orders")
# Split on delimiters
parts = orders.map(lambda l: l.split(","))
# Convert each line to a Row
orders_struct = parts.map(lambda p: Row(order_id=int(p[0]), order_date=p[1], customer_id=p[2], order_status=p[3]))
for i in orders_struct.take(5): print(i)
# Convert the RDD to a DataFrame
orders_df = spark.createDataFrame(orders_struct)
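To confirm what reflection inferred, you can print the schema; the output below is a sketch of what you'd see (note that in Spark versions before 3.0, Row sorts keyword fields alphabetically):
orders_df.printSchema()
## root
##  |-- customer_id: string (nullable = true)
##  |-- order_date: string (nullable = true)
##  |-- order_id: long (nullable = true)
##  |-- order_status: string (nullable = true)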
Specifying the schema programmatically
from pyspark.sql.types import StructType, StructField, StringType
# spark - sparkSession
sc = spark.sparkContext
# Load a text file.
orders = sc.textFile("/practicedata/orders")
# Split on delimiters
parts = orders.map(lambda l: l.split(","))
# Convert to tuple
orders_struct = parts.map(lambda p: (p[0], p[1], p[2], p[3].strip()))
# The schema is encoded in a string.
schemaString = "order_id order_date customer_id status"
fields = [StructField(field_name, StringType(), True) for field_name in schemaString.split()]
schema = StructType(fields)
# Convert the RDD to a DataFrame using the explicit schema
orders_df = spark.createDataFrame(orders_struct, schema)
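With this schema every column is a string; the follow-up cast below (my addition) shows how you'd get a numeric order_id back out:
from pyspark.sql.functions import col
# Cast the string column to int where numeric values are needed
orders_df.select(col("order_id").cast("int"), col("status")).printSchema()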
from pyspark.sql import Row
mylist = [1, 2, 3, 4, None]
l = map(lambda x: Row(x), mylist)
df = spark.createDataFrame(l, ["id"])
df.where(df.id.isNotNull()).show()
Basically, you need to initialize your ints into Row() objects, and then we can use the schema.
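Alternatively (my addition, and likely what the original "notice the parens after the type name" comment referred to), you can pass an instantiated DataType directly instead of wrapping each value in a Row; the column is then named "value":
from pyspark.sql.types import IntegerType
# Note the parens after the type name: pass an instance, not the class
df = spark.createDataFrame([1, 2, 3, 4, None], IntegerType())
df.where(df.value.isNotNull()).show()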
from pyspark.sql import Row
myFloatRdd.map(lambda x: Row(x)).toDF()