Reduce by matching lower-case keys in a Spark RDD

I have an RDD of (key, value) pairs, where the key is a string and the value is the number of times that string occurs.

words.take(10)

Out[98]: [('The', 2767),
 ('Project', 83),
 ('the', 3),
 ('of', 14941),
 ('Leo', 4),
 ('is', 3245),
 ('use', 80),
 ('anyone', 191),
 ('Of', 25),
 ('at', 4235)]

I want to match keys by key.lower(), sum their values, and keep the original value for each upper/lower-case key.

Additionally, I want to filter out keys that appear in only one casing.

So the output for my words.take(10) example above would be:

 [(('The', 2767),('the', 3),2770),(('Of', 25),('of', 14941),14966)]

You can use groupby on the lower-cased word together with collect_list and a filter, as shown below:

from pyspark.sql import functions as f

data = [
    ('The', 2767),
    ('Project', 83),
    ('the', 3),
    ('of', 14941),
    ('Leo', 4),
    ('is', 3245),
    ('use', 80),
    ('anyone', 191),
    ('Of', 25),
    ('at', 4235)
]

df = spark.createDataFrame(data).toDF(*["word", "count"])

df.groupby(f.lower("word").alias("word")) \
  .agg(f.collect_list(f.struct("word", "count")).alias("list"), f.sum("count").alias("sum")) \
  .filter(f.size("list") > 1) \
  .select("list", "sum") \
  .show(truncate=False)

Output:

+-----------------------+-----+
|list                   |sum  |
+-----------------------+-----+
|[{The, 2767}, {the, 3}]|2770 |
|[{of, 14941}, {Of, 25}]|14966|
+-----------------------+-----+