Shuffle Data from Dictionary for Test and Train Data

I want to split the data I have in a dictionary (plus a separate target array) into training and test sets. I have tried various approaches but haven't gotten there. Because of how the features are preprocessed in my pipeline, I need to keep them in a dictionary. Does anyone in the community have a suggestion?

Dictionary (feature values):

{'input1': array([42., 50., 68., ..., 60., 46., 60.]),
 'input2': array([[-2.00370455, -2.35689664, -1.96147382, ...,  2.11014128,
          2.59383321,  1.24209607],
        [-1.97130549, -2.19063663, -2.02996445, ...,  2.32125568,
          2.27316046,  1.48600614],
        [-2.01526666, -2.40440917, -1.94321752, ...,  2.15266657,
          2.68460488,  1.23534095],
        ...,
        [-2.1359458 , -2.52428007, -1.75701785, ...,  2.25480819,
          2.68114281,  1.75468981],
        [-1.95868206, -2.23297167, -1.96401751, ...,  2.07427239,
          2.60306072,  1.28556955],
        [-1.80507278, -2.62199521, -2.08697271, ...,  2.34080577,
          2.48254585,  1.52028871]])}

Target values

y = array([0.83, 0.4 , 0.53, ..., 0.  , 0.94, 1. ])
Shape: (3000,)

Creating the dictionary

# Dictionary values
input1 = embeddings.numpy()
input2 = df['feature'].values
y = df['target'].values

full_model_inputs = [input1, embeddings]
original_model_inputs = dict(input1=input1, input2=input2)

Splitting the data

x_train, x_test, y_train, y_test = train_test_split(
    [original_model_inputs['input1'], original_model_inputs['input2']],
    y, test_size=0.2, random_state=6)

x_train, x_test, y_train, y_test = train_test_split(
    original_model_inputs, y, test_size=0.2, random_state=6)

Error message

ValueError: Found input variables with inconsistent numbers of samples: [2, 3000]
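The error occurs because the outer list of two arrays is treated as having only 2 samples, while `y` has 3000. Note that scikit-learn's `train_test_split` accepts multiple arrays as separate positional arguments and shuffles them all with the same indices, so one way to split while keeping the dictionary format is (a sketch with 100 toy samples in place of the 3000 in the question):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# toy stand-ins for the real features (3000 samples in the question)
input1 = np.random.random((100,))
input2 = np.random.random((100, 3840))
y = np.random.random((100,))

# pass each array as its own positional argument; all three are
# shuffled with the same indices, so rows stay aligned
(x1_train, x1_test,
 x2_train, x2_test,
 y_train, y_test) = train_test_split(input1, input2, y,
                                     test_size=0.2, random_state=6)

# rebuild the dictionary the pipeline expects
x_train = {'input1': x1_train, 'input2': x2_train}
x_test = {'input1': x1_test, 'input2': x2_test}
```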

Input 1:

[55., 46., 46., ..., 60., 60., 45.]

Shape: (3000,)

Input 2:

[[-2.00370455, -2.35689664, -1.96147382, ...,  2.11014128,
         2.59383321,  1.24209607],
       [-1.97130549, -2.19063663, -2.02996445, ...,  2.32125568,
         2.27316046,  1.48600614],
       [-2.01526666, -2.40440917, -1.94321752, ...,  2.15266657,
         2.68460488,  1.23534095],
       ...,
       [-2.1359458 , -2.52428007, -1.75701785, ...,  2.25480819,
         2.68114281,  1.75468981],
       [-1.95868206, -2.23297167, -1.96401751, ...,  2.07427239,
         2.60306072,  1.28556955],
       [-1.80507278, -2.62199521, -2.08697271, ...,  2.34080577,
         2.48254585,  1.52028871]]

Shape: (3000, 3840)

Building the model

input1 = Input(shape=(1,))
input2 = Input(shape=(3840,))

# The first branch operates on the first input
x = Dense(units = 128, activation="relu")(input1)
x = BatchNormalization()(x)
x = Dense(units = 128, activation="relu")(x)
x = BatchNormalization()(x)
x = Model(inputs=input1, outputs=x)

# The second branch operates on the second input (Embeddings)
y = Dense(units = 128, activation="relu")(input2)
y = BatchNormalization()(y)
y = Dense(units = 128, activation="relu")(y)
y = BatchNormalization()(y)  
y = Model(inputs=input2, outputs=y)

# combine the output of the two branches
combined = Concatenate()([x.output, y.output])

out = Dense(128, activation='relu')(combined)
out = Dropout(0.5)(out)
out = Dense(1)(out)

# The model will accept the inputs of the two branches and then output a single value
model = Model(inputs = [x.input, y.input], outputs = out)
model.compile(loss='mse', optimizer=Adam(learning_rate=0.001), metrics=['mse'])

model.fit([X1, X2], Y, epochs=3)

Put your dictionary into a pandas DataFrame; it will preserve the data dimensions and split them as needed:

df = pd.DataFrame({"input1": original_model_inputs["input1"],
                   "input2": list(original_model_inputs["input2"])})
X_train, X_test, y_train, y_test = train_test_split(df, y)

Converting back to the original format:

X_train = X_train.to_dict("list")
X_test = X_test.to_dict("list")

Edit

To keep your pipeline working, you may need to add the following two lines:

X_train = {k:np.array(v) for k,v in X_train.items()}
X_test = {k:np.array(v) for k,v in X_test.items()}
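Put together end to end, the approach looks like this (a runnable sketch with 100 toy samples standing in for the 3000 in the question):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

original_model_inputs = {
    'input1': np.random.random((100,)),
    'input2': np.random.random((100, 3840)),
}
y = np.random.random((100,))

# wrapping input2 in list() makes each row one DataFrame cell,
# so the rows of both features stay aligned during the split
df = pd.DataFrame({'input1': original_model_inputs['input1'],
                   'input2': list(original_model_inputs['input2'])})
X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.2)

# back to the dict-of-arrays format the pipeline expects
X_train = {k: np.array(v) for k, v in X_train.to_dict('list').items()}
X_test = {k: np.array(v) for k, v in X_test.to_dict('list').items()}
```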

You are feeding a nested list as X when calling train_test_split, which raises the error. Instead, you can build a single 2-D feature array from the dictionary and then split it into train and test. As an example:

d = {'input1': np.random.random((10,)),
     'input2': np.random.random((10,3))}
y = np.random.choice([0,1],10)

If one of the arrays in the dictionary has only one dimension, we can simply add an axis and then concatenate the results into a single 2-D array:

X = [a[:,None] if len(a.shape)==1 else a for a in d.values()]
X_train, X_test, y_train, y_test = train_test_split(np.concatenate(X, axis=1), y)
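If the model still needs the dictionary layout after splitting this way, the concatenated columns can be sliced back apart. A sketch, assuming `input1` contributed the first column and `input2` the remaining three:

```python
import numpy as np
from sklearn.model_selection import train_test_split

d = {'input1': np.random.random((10,)),
     'input2': np.random.random((10, 3))}
y = np.random.choice([0, 1], 10)

# promote 1-D arrays to column vectors, then stack side by side
X = [a[:, None] if a.ndim == 1 else a for a in d.values()]
X_train, X_test, y_train, y_test = train_test_split(
    np.concatenate(X, axis=1), y, test_size=0.2, random_state=6)

# slice the columns back into the original dict layout
train_dict = {'input1': X_train[:, 0], 'input2': X_train[:, 1:]}
test_dict = {'input1': X_test[:, 0], 'input2': X_test[:, 1:]}
```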