How to create a UDF that creates a new column AND modifies an existing column
I have a dataframe like this:
id | color
---| -----
1 | red-dark
2 | green-light
3 | red-light
4 | blue-sky
5 | green-dark
I want to create a UDF that turns my dataframe into:
id | color | shade
---| ----- | -----
1 | red | dark
2 | green | light
3 | red | light
4 | blue | sky
5 | green | dark
I wrote a UDF for this:
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

def my_function(data_str):
    return ",".join(data_str.split("-"))

my_function_udf = udf(my_function, StringType())
# apply the UDF
df = df.withColumn("shade", my_function_udf(df['color']))
However, this did not transform the dataframe the way I expected. Instead, it became:
id | color | shade
---| ---------- | -----
1 | red-dark | red,dark
2 | green-light | green,light
3 | red-light | red,light
4 | blue-sky | blue,sky
5 | green-dark | green,dark
How do I transform the dataframe as desired in pyspark?
What I tried, based on the suggested question:
from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, StructType, StructField, StringType

schema = ArrayType(StructType([
    StructField("color", StringType(), False),
    StructField("shade", StringType(), False)
]))

color_shade_udf = udf(
    lambda s: [tuple(s.split("-"))],
    schema
)

df = df.withColumn("colorshade", color_shade_udf(df['color']))
#Gives the following
id | color | colorshade
---| ---------- | -----
1 | red-dark | [{"color":"red","shade":"dark"}]
2 | green-light | [{"color":"green","shade":"light"}]
3 | red-light | [{"color":"red","shade":"light"}]
4 | blue-sky | [{"color":"blue","shade":"sky"}]
5 | green-dark | [{"color":"green","shade":"dark"}]
Feels like I'm getting closer.
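For completeness, this second attempt can be finished without any further UDF work: the single-element array of structs can be unpacked back into flat columns with the standard Column accessors getItem and getField. A minimal sketch, assuming the colorshade column created above:

from pyspark.sql.functions import col

# unpack the one-element array of structs produced by color_shade_udf;
# derive shade first, then overwrite color, then drop the helper column
df = df.withColumn("shade", col("colorshade").getItem(0).getField("shade")) \
       .withColumn("color", col("colorshade").getItem(0).getField("color")) \
       .drop("colorshade")
df.show()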
You can use the built-in function split():
from pyspark.sql.functions import split, col

df.withColumn("arr", split(df.color, "-")) \
  .select("id",
          col("arr")[0].alias("color"),
          col("arr")[1].alias("shade")) \
  .drop("arr") \
  .show()
+---+-----+-----+
| id|color|shade|
+---+-----+-----+
| 1| red| dark|
| 2|green|light|
| 3| red|light|
| 4| blue| sky|
| 5|green| dark|
+---+-----+-----+
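Built-in functions such as split() are evaluated inside the JVM by Catalyst, so they avoid the Python serialization overhead of a udf. If the goal is literally to modify the existing color column in place rather than rebuild the select list, the same split() expression also works with two withColumn calls; a minimal sketch (shade must be derived before color is overwritten):

from pyspark.sql.functions import split, col

df = df.withColumn("shade", split(col("color"), "-").getItem(1)) \
       .withColumn("color", split(col("color"), "-").getItem(0))
df.show()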