Using duplicate values from one column to remove entire rows in a pandas DataFrame

I have uploaded the data from the .csv file at the link below:

Click here for the data

In this file, I have the following columns:

Team    Group    Model   SimStage  Points  GpWinner  GpRunnerup 3rd   4th

The Team column contains duplicates. Another column is SimStage. SimStage holds a range of values from 0 to N (0 to 4 in this case).

I want to keep one row per team for each SimStage value (i.e., the rest should be removed). When removing, the duplicate row with the lower value in the Points column should be dropped for each Team and SimStage. Since this is a little hard to explain in words alone, I have attached an image here.

In the image, the rows highlighted by the red boxes are the ones to be removed.

I tried df.drop_duplicates(), but it did not work.
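For reference, a minimal sketch of why a plain drop_duplicates call falls short here (the values below are illustrative, not taken from the actual .csv):

import pandas as pd

# Illustrative mini example of the situation described above.
df = pd.DataFrame({
    'Team':     ['Brazil', 'Brazil', 'Brazil', 'Brazil'],
    'SimStage': [0, 0, 1, 1],
    'Points':   [4, 1, 2, 4],
})

# drop_duplicates() only looks at the columns passed in `subset`; it has
# no notion of "keep the higher Points", so it simply keeps whichever
# duplicate happens to come first.
print(df.drop_duplicates(subset=['Team', 'SimStage'], keep='first'))
#      Team  SimStage  Points
# 0  Brazil         0       4
# 2  Brazil         1       2   <- not the maximum for SimStage 1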

It looks like you only want to keep the maximum value in the 'Points' column, so use the first aggregation function in pandas.

Create the DataFrame and call it df:

import pandas as pd

data = {'Team': {0: 'Brazil',  1: 'Brazil',  2: 'Brazil',  3: 'Brazil',  4: 'Brazil',  5: 'Brazil',  6: 'Brazil',  7: 'Brazil',  8: 'Brazil',  9: 'Brazil'},
 'Group': {0: 'Group E',  1: 'Group E',  2: 'Group E',  3: 'Group E',  4: 'Group E',  5: 'Group E',  6: 'Group E',  7: 'Group E',  8: 'Group E',  9: 'Group E'},
 'Model': {0: 'ELO',  1: 'ELO',  2: 'ELO',  3: 'ELO',  4: 'ELO',  5: 'ELO',  6: 'ELO',  7: 'ELO',  8: 'ELO',  9: 'ELO'},
 'SimStage': {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2, 6: 3, 7: 3, 8: 4, 9: 4},
 'Points': {0: 4, 1: 4, 2: 4, 3: 4, 4: 4, 5: 1, 6: 2, 7: 4, 8: 4, 9: 1},
 'GpWinner': {0: 0.2,  1: 0.2,  2: 0.2,  3: 0.2,  4: 0.2,  5: 0.0,  6: 0.2,  7: 0.2,  8: 0.2,  9: 0.0},
 'GpRunnerup': {0: 0.0,  1: 0.0,  2: 0.0,  3: 0.0,  4: 0.0,  5: 0.2,  6: 0.0,  7: 0.0,  8: 0.0,  9: 0.2},
 '3rd': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0},
 '4th': {0: 0, 1: 0, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0}}

df = pd.DataFrame(data)

# To be able to output the dataframe in your original order
columns_order = ['Team', 'Group', 'Model', 'SimStage', 'Points', 'GpWinner', 'GpRunnerup', '3rd', '4th']

Method 1

# Sort by 'SimStage' ascending and, within each stage, by 'Points' descending
df = df.sort_values(['SimStage', 'Points'], ascending=[True, False])

# Group by 'Team' and 'SimStage' and keep the first row of each group
df = df.groupby(['Team', 'SimStage'], as_index=False).agg('first')

# Output the dataframe in the original column order
df[columns_order]

Out[]: 
     Team    Group Model  SimStage  Points  GpWinner  GpRunnerup  3rd  4th
0  Brazil  Group E   ELO         0       4       0.2         0.0    0    0
1  Brazil  Group E   ELO         1       4       0.2         0.0    0    0
2  Brazil  Group E   ELO         2       4       0.2         0.0    0    0
3  Brazil  Group E   ELO         3       4       0.2         0.0    0    0
4  Brazil  Group E   ELO         4       4       0.2         0.0    0    0

Method 2

df.sort_values('Points', ascending=False).drop_duplicates(['Team', 'SimStage'])[columns_order]
Out[]: 
     Team    Group Model  SimStage  Points  GpWinner  GpRunnerup  3rd  4th
0  Brazil  Group E   ELO         0       4       0.2         0.0    0    0
2  Brazil  Group E   ELO         1       4       0.2         0.0    0    0
4  Brazil  Group E   ELO         2       4       0.2         0.0    0    0
7  Brazil  Group E   ELO         3       4       0.2         0.0    0    0
8  Brazil  Group E   ELO         4       4       0.2         0.0    0    0
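A sketch of an equivalent alternative, assuming df is still the original, un-aggregated DataFrame built above, is to let idxmax pick the row with the highest Points for each Team and SimStage directly:

# For each (Team, SimStage) group, find the index label of the row with
# the highest Points, then select those rows.
idx = df.groupby(['Team', 'SimStage'])['Points'].idxmax()
df.loc[idx, columns_order]

On ties, idxmax returns the first occurrence, which matches the keep='first' behaviour of drop_duplicates.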

I just created a mini dataset with Team, SimStage, and Points, based on the dataset you provided here.

import pandas as pd

namesDf = pd.DataFrame() 
namesDf['Team'] = ['Brazil', 'Brazil', 'Brazil', 'Brazil', 'Brazil', 'Brazil', 'Brazil', 'Brazil', 'Brazil', 'Brazil']
namesDf['SimStage'] = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
namesDf['Points'] = [4, 4, 4, 4, 4, 1, 2, 4, 4, 1]

Now, for each SimStage, you want the highest Points value. So I first group by Team and SimStage, then sort each group by Points in descending order.

namesDf = namesDf.groupby(['Team', 'SimStage'], as_index = False).apply(lambda x: x.sort_values(['Points'], ascending = False)).reset_index(drop = True)

This makes my DataFrame look like this; note how the rows for SimStage 3 have changed:

     Team  SimStage  Points
0  Brazil         0       4
1  Brazil         0       4
2  Brazil         1       4
3  Brazil         1       4
4  Brazil         2       4
5  Brazil         2       1
6  Brazil         3       4
7  Brazil         3       2
8  Brazil         4       4
9  Brazil         4       1

Now I drop the duplicates by keeping the first occurrence for each Team and SimStage.

namesDf = namesDf.drop_duplicates(subset=['Team', 'SimStage'], keep = 'first')

Final result:

     Team  SimStage  Points
0  Brazil         0       4
2  Brazil         1       4
4  Brazil         2       4
6  Brazil         3       4
8  Brazil         4       4
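As a sketch of a further simplification, assuming namesDf is the raw mini dataset built at the start of this answer, the groupby/apply sorting step can be folded into a single multi-key sort before dropping duplicates:

# Sort so that, within each Team/SimStage pair, the highest Points
# comes first, then keep only the first row of every pair.
result = namesDf.sort_values(['Team', 'SimStage', 'Points'],
                             ascending=[True, True, False])
result = result.drop_duplicates(subset=['Team', 'SimStage'], keep='first')
print(result)

This yields one row per Team and SimStage carrying the maximum Points, the same as the two-step version above.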