Calculate a sequence of Markov chain values

I have a Spark question: for each entity k in the input I have a sequence of probabilities p_i with an associated value v_i, so the data can look like this

entity | Probability | value
A      | 0.8         | 10
A      | 0.6         | 15
A      | 0.3         | 20
B      | 0.8         | 10

Then, for entity A, I want the mean value to be 0.8*10 + (1-0.8)*0.6*15 + (1-0.8)*(1-0.6)*0.3*20 + (1-0.8)*(1-0.6)*(1-0.3)*MAX_VALUE_DEFINED
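
For reference, the same expectation written out in plain Python (MAX_VALUE_DEFINED stands for whatever fallback constant you define; 100 is only an illustrative value):

probs = [0.8, 0.6, 0.3]
vals = [10, 15, 20]
MAX_VALUE_DEFINED = 100  # illustrative fallback constant

expected = 0.0
remaining = 1.0  # probability that none of the earlier rows applied
for p, v in zip(probs, vals):
    expected += remaining * p * v
    remaining *= (1 - p)
expected += remaining * MAX_VALUE_DEFINED  # 0.2 * 0.4 * 0.7 * MAX_VALUE_DEFINED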

How can I achieve this in Spark with a DataFrame agg function? I find it challenging because of the groupBy on entity and the complexity of computing the resulting sequence.

You can use a UDF to perform this kind of custom calculation. The idea is to use collect_list to gather all probabilities and values of A in one place so you can loop through them. However, collect_list does not preserve the order of your records, which could lead to a wrong calculation. One way to fix that is to generate an ID for each row using monotonically_increasing_id and sort on it.

import pyspark.sql.functions as F

@F.pandas_udf('double')
def markov_udf(values):
    def markov(lst):
        # placeholder logic: simply sums the probabilities;
        # implement your actual Markov calculation here
        s = 0
        for i, prob, val in lst:
            s += prob
        return s
    return values.apply(markov)

(df
    .withColumn('id', F.monotonically_increasing_id())
    .groupBy('entity')
    # each element is [id, probability, value]; sorting on the leading id
    # restores the original row order within each group
    .agg(F.array_sort(F.collect_list(F.array('id', 'probability', 'value'))).alias('values'))
    .withColumn('markov', markov_udf('values'))
    .show(10, False)
)

+------+------------------------------------------------------+------+
|entity|values                                                |markov|
+------+------------------------------------------------------+------+
|B     |[[3.0, 0.8, 10.0]]                                    |0.8   |
|A     |[[0.0, 0.8, 10.0], [1.0, 0.6, 15.0], [2.0, 0.3, 20.0]]|1.7   |
+------+------------------------------------------------------+------+
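
If you want the UDF to compute the expectation from the question instead of the placeholder sum, the inner markov function could look roughly like this (a sketch assuming you define your own MAX_VALUE_DEFINED constant):

MAX_VALUE_DEFINED = 100  # assumed fallback constant, replace with yours

def markov(lst):
    s = 0.0
    remaining = 1.0  # probability that none of the previous rows applied
    for i, prob, val in lst:
        s += remaining * prob * val
        remaining *= (1 - prob)
    return s + remaining * MAX_VALUE_DEFINED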

There may be a better solution, but I think this achieves what you need.

from pyspark.sql import functions as F, Window as W

df = spark.createDataFrame(
    [('A', 0.8, 10),
     ('A', 0.6, 15),
     ('A', 0.3, 20),
     ('B', 0.8, 10)],
    ['entity', 'Probability', 'value']
)

w_desc = W.partitionBy('entity').orderBy(F.desc('value'))
w_asc = W.partitionBy('entity').orderBy('value')

# max value per entity, used here in place of MAX_VALUE_DEFINED
df = df.withColumn('_ent_max_val', F.max('value').over(w_desc))
# probability that this row does not "fire"
df = df.withColumn('_prob2', 1 - F.col('Probability'))
# product of (1 - Probability) over all preceding rows (excludes the current row)
df = df.withColumn('_cum_prob2', F.product('_prob2').over(w_asc) / F.col('_prob2'))

df = (df.groupBy('entity')
        .agg(F.round((F.max('_ent_max_val') * F.product('_prob2')  # fallback term
                     + F.sum(F.col('_cum_prob2') * F.col('Probability') * F.col('value'))
             ), 2).alias('mean_value'))
)
df.show()
# +------+----------+
# |entity|mean_value|
# +------+----------+
# |     A|      11.4|
# |     B|      10.0|
# +------+----------+
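
Note that this version uses each entity's own max value in place of MAX_VALUE_DEFINED, and that F.product requires Spark 3.1 or later. If you have an explicit constant instead, the aggregation step could be adjusted roughly like this (assuming a hypothetical MAX_VALUE_DEFINED of 100):

MAX_VALUE_DEFINED = 100  # hypothetical constant, replace with yours

df = (df.groupBy('entity')
        .agg(F.round((F.lit(MAX_VALUE_DEFINED) * F.product('_prob2')
                     + F.sum(F.col('_cum_prob2') * F.col('Probability') * F.col('value'))
             ), 2).alias('mean_value'))
)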