How to compute the percentile for each key in a PySpark dataframe?
I have a PySpark dataframe consisting of three columns, x, y, z.
X may appear on multiple rows in this dataframe. How can I compute the percentile separately for each key in x?
+------+---------+------+
| Name| Role|Salary|
+------+---------+------+
| bob|Developer|125000|
| mark|Developer|108000|
| carl| Tester| 70000|
| carl|Developer|185000|
| carl| Tester| 65000|
| roman| Tester| 82000|
| simon|Developer| 98000|
| eric|Developer|144000|
|carlos| Tester| 75000|
| henry|Developer|110000|
+------+---------+------+
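For anyone who wants to run the answers below, a minimal sketch that recreates this sample dataframe, assuming an active SparkSession named `spark` (names taken from the table above):

import pyspark
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Build the example dataframe shown above
df = spark.createDataFrame(
    [('bob', 'Developer', 125000), ('mark', 'Developer', 108000),
     ('carl', 'Tester', 70000), ('carl', 'Developer', 185000),
     ('carl', 'Tester', 65000), ('roman', 'Tester', 82000),
     ('simon', 'Developer', 98000), ('eric', 'Developer', 144000),
     ('carlos', 'Tester', 75000), ('henry', 'Developer', 110000)],
    ['Name', 'Role', 'Salary'],
)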
Desired output:
+------+---------+------+--------+
|  Name|     Role|Salary|     50%|
+------+---------+------+--------+
|   bob|Developer|125000|117500.0|
|  mark|Developer|108000|117500.0|
|  carl|   Tester| 70000| 72500.0|
|  carl|Developer|185000|117500.0|
|  carl|   Tester| 65000| 72500.0|
| roman|   Tester| 82000| 72500.0|
| simon|Developer| 98000|117500.0|
|  eric|Developer|144000|117500.0|
|carlos|   Tester| 75000| 72500.0|
| henry|Developer|110000|117500.0|
+------+---------+------+--------+
You can try the approxQuantile function available in Spark.
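Note that DataFrame.approxQuantile operates on the whole dataframe rather than per group, so using it per key takes a filter loop; also, even with relativeError=0.0 it returns an existing column value rather than an interpolated midpoint. A rough sketch, assuming df is the dataframe above:

# Per-key medians via DataFrame.approxQuantile, assuming `df` from above.
# relativeError=0.0 requests an exact quantile, but the result is an actual
# Salary value from the group, so for even-sized groups it can differ from
# percentile()'s interpolated 72500.0 / 117500.0.
roles = [r['Role'] for r in df.select('Role').distinct().collect()]
medians = {
    role: df.where(df['Role'] == role).approxQuantile('Salary', [0.5], 0.0)[0]
    for role in roles
}
print(medians)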
Or try groupby + F.expr:
import pyspark.sql.functions as F

# percentile(col, array(...)) computes exact percentiles per group and
# returns an array; [0] pulls the single value out of each one-element array
df1 = df.groupby('Role').agg(
    F.expr('percentile(Salary, array(0.25))')[0].alias('%25'),
    F.expr('percentile(Salary, array(0.50))')[0].alias('%50'),
    F.expr('percentile(Salary, array(0.75))')[0].alias('%75'),
)
df1.show()
Output:
+---------+--------+--------+--------+
| Role| %25| %50| %75|
+---------+--------+--------+--------+
| Tester| 68750.0| 72500.0| 76750.0|
|Developer|108500.0|117500.0|139250.0|
+---------+--------+--------+--------+
Now you can join df1 back to the original dataframe:
df.join(df1, on='Role', how='left').show()
Output:
+---------+------+------+--------+--------+--------+
| Role| Name|Salary| %25| %50| %75|
+---------+------+------+--------+--------+--------+
| Tester| carl| 70000| 68750.0| 72500.0| 76750.0|
| Tester| carl| 65000| 68750.0| 72500.0| 76750.0|
| Tester| roman| 82000| 68750.0| 72500.0| 76750.0|
| Tester|carlos| 75000| 68750.0| 72500.0| 76750.0|
|Developer| bob|125000|108500.0|117500.0|139250.0|
|Developer| mark|108000|108500.0|117500.0|139250.0|
|Developer| carl|185000|108500.0|117500.0|139250.0|
|Developer| simon| 98000|108500.0|117500.0|139250.0|
|Developer| eric|144000|108500.0|117500.0|139250.0|
|Developer| henry|110000|108500.0|117500.0|139250.0|
+---------+------+------+--------+--------+--------+
The array isn't really needed here; F.expr('percentile(Salary, 0.5)') together with a window function does the job:
import pyspark.sql.functions as F
from pyspark.sql import Window as W

# percentile as a window aggregate: every row gets its Role group's exact median
df = df.withColumn('50%', F.expr('percentile(Salary, 0.5)').over(W.partitionBy('Role')))
df.show()
# +------+---------+------+--------+
# | Name| Role|Salary| 50%|
# +------+---------+------+--------+
# | bob|Developer|125000|117500.0|
# | mark|Developer|108000|117500.0|
# | carl|Developer|185000|117500.0|
# | simon|Developer| 98000|117500.0|
# | eric|Developer|144000|117500.0|
# | henry|Developer|110000|117500.0|
# | carl| Tester| 70000| 72500.0|
# | carl| Tester| 65000| 72500.0|
# | roman| Tester| 82000| 72500.0|
# |carlos| Tester| 75000| 72500.0|
# +------+---------+------+--------+
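On large data where the exact percentile is expensive, PySpark 3.1+ also exposes percentile_approx (the SQL counterpart of approxQuantile), which can be used over the same window; a sketch under that version assumption:

from pyspark.sql import functions as F, Window as W

# percentile_approx trades accuracy for memory; accuracy (default 10000)
# controls the estimate quality. Like approxQuantile, it returns an actual
# Salary value from the group rather than an interpolated midpoint.
df.withColumn(
    'approx_50%',
    F.percentile_approx('Salary', 0.5, 10000).over(W.partitionBy('Role')),
).show()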