Lambda Expression + pySpark
I'm trying to compare a column in a Spark DataFrame with a given date: if the column's date is earlier than the given date, add n hours, otherwise add x hours.
Something like
addhours = lambda x, y: x + 14hrs if (x < y) else x + 10hrs
where y holds the specified static date, which would then be applied to a DataFrame column,
something like
df = df.withColumn("newDate", checkDate(df.Time, F.lit('2015-01-01') ))
Here is a sample df:
from pyspark.sql import functions as F
import datetime
df = spark.createDataFrame([('America/NewYork', '2020-02-01 10:00:00'),('Africa/Nairobi', '2020-02-01 10:00:00')],["OriginTz", "Time"])
I'm a bit new to Spark DataFrames :)
Use a when + otherwise statement instead of a udf.

Example:
from pyspark.sql import functions as F
#cast Time to timestamp and the literal to date so they can be compared in when
df = spark.createDataFrame([('America/NewYork', '2020-02-01 10:00:00'),('Africa/Nairobi', '2003-02-01 10:00:00')],["OriginTz", "Time"]).\
withColumn("literal",F.lit('2015-01-01').cast("date")).\
withColumn("Time",F.col("Time").cast("timestamp"))
df.show()
#+---------------+-------------------+----------+
#|       OriginTz|               Time|   literal|
#+---------------+-------------------+----------+
#|America/NewYork|2020-02-01 10:00:00|2015-01-01|
#| Africa/Nairobi|2003-02-01 10:00:00|2015-01-01|
#+---------------+-------------------+----------+
#unix_timestamp converts to epoch seconds; add 10*3600 (10 hrs) or 14*3600 (14 hrs), then to_timestamp converts back to a timestamp
df.withColumn("new_date",F.when(F.col("Time") > F.col("literal"),F.to_timestamp(F.unix_timestamp(F.col("Time"),'yyyy-MM-dd HH:mm:ss') + 10 * 3600)).\
otherwise(F.to_timestamp(F.unix_timestamp(F.col("Time"),'yyyy-MM-dd HH:mm:ss') + 14 * 3600))).\
show()
#+---------------+-------------------+----------+-------------------+
#|       OriginTz|               Time|   literal|           new_date|
#+---------------+-------------------+----------+-------------------+
#|America/NewYork|2020-02-01 10:00:00|2015-01-01|2020-02-01 20:00:00|
#| Africa/Nairobi|2003-02-01 10:00:00|2015-01-01|2003-02-02 00:00:00|
#+---------------+-------------------+----------+-------------------+
If you don't want to add the literal value as a DataFrame column:
lit_val='2015-01-01'
df = spark.createDataFrame([('America/NewYork', '2020-02-01 10:00:00'),('Africa/Nairobi', '2003-02-01 10:00:00')],["OriginTz", "Time"]).\
withColumn("Time",F.col("Time").cast("timestamp"))
df.withColumn("new_date",F.when(F.col("Time") > F.lit(lit_val).cast("date"),F.to_timestamp(F.unix_timestamp(F.col("Time"),'yyyy-MM-dd HH:mm:ss') + 10 * 3600)).\
otherwise(F.to_timestamp(F.unix_timestamp(F.col("Time"),'yyyy-MM-dd HH:mm:ss') + 14 * 3600))).\
show()
#+---------------+-------------------+-------------------+
#|       OriginTz|               Time|           new_date|
#+---------------+-------------------+-------------------+
#|America/NewYork|2020-02-01 10:00:00|2020-02-01 20:00:00|
#| Africa/Nairobi|2003-02-01 10:00:00|2003-02-02 00:00:00|
#+---------------+-------------------+-------------------+
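If you want the call shape from the original question, i.e. df = df.withColumn("newDate", checkDate(df.Time, F.lit('2015-01-01'))), you can wrap the same when + otherwise expression in a plain Python function that returns a Column. This is only a sketch built from the snippets above; the name checkDate comes from the question, and the 10/14 hour offsets follow the earlier example.
from pyspark.sql import functions as F

def checkDate(time_col, cutoff):
    #build epoch seconds, then branch: +10 hrs when Time is after the cutoff date, else +14 hrs
    epoch = F.unix_timestamp(time_col, 'yyyy-MM-dd HH:mm:ss')
    return F.when(time_col > cutoff.cast("date"), F.to_timestamp(epoch + 10 * 3600)).\
        otherwise(F.to_timestamp(epoch + 14 * 3600))

df = df.withColumn("newDate", checkDate(df.Time, F.lit('2015-01-01')))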
You can also do this with .expr and interval. That way you don't have to convert to another format and back.
from pyspark.sql import functions as F
df.withColumn("new_date", F.expr("""IF(Time<y, Time + interval 14 hours, Time + interval 10 hours)""")).show()
#+---------------+-------------------+----------+-------------------+
#|       OriginTz|               Time|         y|           new_date|
#+---------------+-------------------+----------+-------------------+
#|America/NewYork|2020-02-01 10:00:00|2020-01-01|2020-02-01 20:00:00|
#| Africa/Nairobi|2020-02-01 10:00:00|2020-01-01|2020-02-01 20:00:00|
#+---------------+-------------------+----------+-------------------+
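Note that this last snippet assumes df already has the static date as a column named y; none of the earlier examples create it. A minimal setup that matches the output shown above (the column name y and the 2020-01-01 value are taken from that output, the rest is an assumption):
from pyspark.sql import functions as F
df = spark.createDataFrame([('America/NewYork', '2020-02-01 10:00:00'),('Africa/Nairobi', '2020-02-01 10:00:00')],["OriginTz", "Time"]).\
    withColumn("Time",F.col("Time").cast("timestamp")).\
    withColumn("y",F.lit('2020-01-01').cast("date"))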