How can I add functions to aggregations in groupby in python?
I am trying to get groupby statistics that involve an extra mathematical operation between the aggregations.
I tried:
...agg({
    'id': "count",
    'repair': "count",
    ('repair': "count") / ('id': "count")
})
yr id repair
2016 37 27
2017 53 28
After grouping, I can get this statistic with:
gr['repair']/gr['id']*100
yr
2016 0.73
2017 0.53
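For context, gr here is a plain count aggregation computed first; presumably something like the following sketch, where the name df is my assumption and yr is the grouping column shown in the output above:

gr = df.groupby('yr').agg({'id': 'count', 'repair': 'count'})
gr['repair'] / gr['id'] * 100   # the extra step currently done outside the agg call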
How can I do this kind of calculation inside the groupby?
Consider a custom function that returns an aggregated data set. Within each group it assigns the counts and the derived ratio as constant columns, so collapsing with a simple aggregation such as 'max' leaves one row per group:
def agg_func(g):
    g['id'] = g['id'].count()
    g['repair'] = g['repair'].count()
    g['repair_per_id'] = (g['repair'] / g['id']) * 100
    return g.aggregate('max')    # CAN ALSO USE: min, mean, median, mode
agg_df = (df.groupby(['group'])
            .apply(agg_func)
            .reset_index(drop=True)
          )
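As an alternative sketch (my own addition, not part of the original answer): with pandas 0.25+ named aggregation you can keep everything in a single chain by aggregating the counts first and deriving the ratio afterwards with assign. Column names here are assumed to match the demo data below:

agg_df = (df.groupby('group')
            .agg(id=('id', 'count'), repair=('repair', 'count'))
            .assign(repair_per_id=lambda d: d['repair'] / d['id'] * 100)
            .reset_index())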
To demonstrate with seeded random data:
import numpy as np
import pandas as pd
data_tools = ['sas', 'stata', 'spss', 'python', 'r', 'julia']
np.random.seed(8192019)
random_df = pd.DataFrame({'group': np.random.choice(data_tools, 500),
                          'id': np.random.randint(1, 10, 500),
                          'repair': np.random.uniform(0, 100, 500)
                         })
# RANDOMLY ASSIGN NANs
random_df.loc[np.random.choice(random_df.index, 75), 'repair'] = np.nan
# RUN AGGREGATIONS
agg_df = (random_df.groupby(['group'])
                   .apply(agg_func)
                   .reset_index(drop=True)
          )
print(agg_df)
# group id repair repair_per_id
# 0 julia 79 70 88.607595
# 1 python 89 74 83.146067
# 2 r 82 69 84.146341
# 3 sas 74 66 89.189189
# 4 spss 77 69 89.610390
# 5 stata 99 84 84.848485
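As a quick sanity check (a sketch using the seeded data above), one group's figures can be recomputed directly and compared against the printed row:

julia = random_df[random_df['group'] == 'julia']
print(julia['id'].count(),
      julia['repair'].count(),
      julia['repair'].count() / julia['id'].count() * 100)
# Expected to match the julia row above: 79 70 88.607595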