How to split duplicate samples into train and test sets with no overlap?

I have an NLP dataset (about 300K samples) that contains duplicates. I want to split it into train and test sets (70%-30%) with no overlap between them.

For example:

| dataset | train | test |
|---------|-------|------|
| a       | a     | c    |
| a       | a     | c    |
| b       | b     | c    |
| b       | b     |      |
| b       | b     |      |
| c       | d     |      |
| c       | d     |      |
| c       |       |      |
| d       |       |      |
| d       |       |      |

I have tried exhaustive random resampling, but it takes too much time.
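
A minimal sketch of what such a rejection loop might look like under that reading (the toy data below is illustrative, not the real dataset): shuffle the rows, cut at 70%, and retry whenever a value lands on both sides. On ~300K rows this loop can spin for a very long time.

```python
import random

data = ["a", "a", "b", "b", "b", "c", "c", "c", "d", "d"]  # toy stand-in

# Rejection sampling: keep re-shuffling until no value straddles the cut.
while True:
    shuffled = random.sample(data, len(data))
    cut = int(len(shuffled) * 0.7)
    train, test = shuffled[:cut], shuffled[cut:]
    if set(train).isdisjoint(test):
        break

print(train, test)
```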

If I understand you correctly, try this:

```python
from sklearn.model_selection import GroupShuffleSplit

# Group rows by the column that identifies duplicates, so every group
# lands entirely in either train or test.
train_inds, test_inds = next(
    GroupShuffleSplit(test_size=0.20, n_splits=2, random_state=7)
    .split(df, groups=df['duplicate_column'])
)

train = df.iloc[train_inds]
test = df.iloc[test_inds]
```
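
For reference, a minimal end-to-end run on the toy data from the question, assuming the duplicated value itself serves as the group key (the column name `text` is made up for the example):

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

df = pd.DataFrame({"text": ["a", "a", "b", "b", "b", "c", "c", "c", "d", "d"]})

train_inds, test_inds = next(
    GroupShuffleSplit(test_size=0.20, n_splits=2, random_state=7)
    .split(df, groups=df["text"])
)
train, test = df.iloc[train_inds], df.iloc[test_inds]

# Every distinct value ends up on exactly one side of the split.
assert set(train["text"]).isdisjoint(test["text"])
```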

This works, but it takes a few steps to get there:

```python
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split

# Original dataset with duplicates
dataset = pd.DataFrame(["a", "a", "b", "b", "b", "c", "c", "c", "d", "d"])

# Get the unique values, remembering how often each one occurs
data_no_dup, counts = np.unique(dataset, return_counts=True)
count_of = dict(zip(data_no_dup, counts))

# Split the de-duplicated values the standard scikit-learn way
train_no_dup, test_no_dup = train_test_split(data_no_dup, test_size=0.2, random_state=0)

# Re-expand each side to its original multiplicity
train, test = [], []
for sample in train_no_dup:
    train.extend([sample] * count_of[sample])
for sample in test_no_dup:
    test.extend([sample] * count_of[sample])

print("Train: {}".format(train))
print("Test: {}".format(test))
```

Output:

```
Train: ['d', 'd', 'b', 'b', 'b', 'a', 'a']
Test: ['c', 'c', 'c']
```
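
If you prefer to stay in pandas end to end, the same idea can be sketched with `drop_duplicates` and `isin` (again assuming a made-up column name `text`; `test_size=0.3` matches the 70%-30% split asked for):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({"text": ["a", "a", "b", "b", "b", "c", "c", "c", "d", "d"]})

# Split the unique values, then route every original row (duplicates
# included) to the side its value was assigned to.
unique_vals = df["text"].drop_duplicates()
train_vals, test_vals = train_test_split(unique_vals, test_size=0.3, random_state=0)

train = df[df["text"].isin(train_vals)]
test = df[df["text"].isin(test_vals)]
```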