Select random rows from PySpark dataframe
I want to select n random rows (without replacement) from a PySpark dataframe, preferably in the form of a new PySpark dataframe. What is the best way to do this?
Below is an example dataframe with ten rows.
+-----+-------------------+-----+
| name| timestamp|value|
+-----+-------------------+-----+
|name1|2019-01-17 00:00:00|11.23|
|name2|2019-01-17 00:00:00|14.57|
|name3|2019-01-10 00:00:00| 2.21|
|name4|2019-01-10 00:00:00| 8.76|
|name5|2019-01-17 00:00:00|18.71|
|name5|2019-01-10 00:00:00|17.78|
|name4|2019-01-10 00:00:00| 5.52|
|name3|2019-01-10 00:00:00| 9.91|
|name1|2019-01-17 00:00:00| 1.16|
|name2|2019-01-17 00:00:00| 12.0|
+-----+-------------------+-----+
The dataframe above was generated with the following code:
from pyspark.sql import Row, SparkSession

# Create (or reuse) a SparkSession so that `spark` is defined
spark = SparkSession.builder.getOrCreate()

df_Stats = Row("name", "timestamp", "value")
df_stat1 = df_Stats('name1', "2019-01-17 00:00:00", 11.23)
df_stat2 = df_Stats('name2', "2019-01-17 00:00:00", 14.57)
df_stat3 = df_Stats('name3', "2019-01-10 00:00:00", 2.21)
df_stat4 = df_Stats('name4', "2019-01-10 00:00:00", 8.76)
df_stat5 = df_Stats('name5', "2019-01-17 00:00:00", 18.71)
df_stat6 = df_Stats('name5', "2019-01-10 00:00:00", 17.78)
df_stat7 = df_Stats('name4', "2019-01-10 00:00:00", 5.52)
df_stat8 = df_Stats('name3', "2019-01-10 00:00:00", 9.91)
df_stat9 = df_Stats('name1', "2019-01-17 00:00:00", 1.16)
df_stat10 = df_Stats('name2', "2019-01-17 00:00:00", 12.0)
df_stat_lst = [df_stat1, df_stat2, df_stat3, df_stat4, df_stat5,
               df_stat6, df_stat7, df_stat8, df_stat9, df_stat10]
df = spark.createDataFrame(df_stat_lst)
pyspark.sql.DataFrame has a sample method. The docs for it should help here.
Usage:
df.sample(withReplacement=False, fraction=desired_fraction)
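Note that sample draws an approximate fraction of rows, not an exact count. If you need exactly n rows, one common workaround (not part of the answer above) is to shuffle the dataframe with a random sort key and keep the first n rows; the names n and sampled_df below are just for illustration.

from pyspark.sql.functions import rand

n = 3  # hypothetical: number of random rows wanted

# Order by a random column, then take the first n rows.
# This samples without replacement and returns a new DataFrame.
sampled_df = df.orderBy(rand(seed=42)).limit(n)
sampled_df.show()

Passing a seed makes the selection reproducible; drop it if you want a different sample on each run.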