Is it possible to have a regression model with more than one output?
I'm working on a CNN regression model with TensorFlow. I'd like to know whether it is possible to use regression to estimate more than one quantity from my dataset at once.
(In other words, I want to estimate the position (x, y, z) and rotation (pitch, yaw, roll) of a person's shoulders and elbows from the positions and rotations of the head and both hands.)
So the output of my model should be 6 values per joint (e.g. for the elbow).
Here is a sample of my code (I use tf.Session to train the model):
# define placeholder for inputs to network
xs = tf.placeholder(tf.float32, [None, 42], name ='input_node')
ys = tf.placeholder(tf.float32, [None, 1])
keep_prob = tf.placeholder(tf.float32)
#Network computations and Layers
x_image = tf.reshape(xs, [-1, 3, 3,1])
## conv1 layer
W_conv1 = weight_func([3, 3, 1, 32])
b_conv1 = bias_func([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
# h_drop1 = tf.nn.dropout(h_conv1, keep_prob)
## conv2 layer
W_conv2 = weight_func([3, 3, 32, 64])
b_conv2 = bias_func([64])
h_conv2 = tf.nn.relu(conv2d(h_conv1, W_conv2) + b_conv2)
# h_drop2 = tf.nn.dropout(h_conv2, keep_prob)
## conv3 layer
W_conv3 = weight_func([3, 3, 64, 128])
b_conv3 = bias_func([128])
h_conv3 = tf.nn.relu(conv2d(h_conv2, W_conv3) + b_conv3)
# h_drop3 = tf.nn.dropout(h_conv3, keep_prob)
## conv4 layer
W_conv4 = weight_func([3, 3, 128,256])
b_conv4 = bias_func([256])
h_conv4 = tf.nn.relu(conv2d(h_conv3, W_conv4) + b_conv4)
# h_drop4 = tf.nn.dropout(h_conv4, keep_prob)
## fc1 layer
W_fc1 = weight_func([3 * 3 * 256, 2304])
b_fc1 = bias_func([2304])  # must match the 2304 output units of W_fc1
h_pool2_flat = tf.reshape(h_conv4, [-1, 3* 3 * 256])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
# fc2 layer ## full connection
W_fc2 = weight_func([2304, 1])
b_fc2 = bias_func([1])
prediction = tf.add(tf.matmul(h_fc1_drop, W_fc2) , b_fc2, name= 'output_node')
# sum-of-squares regression loss (renamed from the misleading "cross_entropy")
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), axis=[1]))
sess = tf.Session()
train_step = tf.train.AdamOptimizer(0.01).minimize(loss)
sess.run(tf.global_variables_initializer())
for i in range(50):
    sess.run([train_step], feed_dict={xs: train_x, ys: train_y, keep_prob: 0.7})
prediction_value = sess.run(prediction, feed_dict={xs: test_x, ys: test_y, keep_prob: 1.0})
Sure, just create a second output. Unfortunately I can't tell you how to do that for TensorFlow 1.14, but in TF 2.0 it is as simple as what I did in another model:
output_categorical = layer.Dense(5, activation="softmax")(dense_layer_out_cat)
output_continuus = layer.Dense(1, activation="sigmoid")(dense_layer_out_con)
model = tf.keras.Model(inputs=[layer_input_categorical, layer_input_categorical_2, layer_input_continuus], \
outputs=[output_categorical, output_continuus])
model.compile(optimizer="Nadam", loss=["sparse_categorical_crossentropy", "mse"])  # losses match the outputs positionally
In this code you can see a model that outputs one classification value and one regression value. In the end that's all there is to it, no big magic: create multiple output layers, tell the model to use all of them as outputs, and define one loss function per output (possibly the same one six times, if you run six regressions). That's it.
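To make the wiring concrete, here is a minimal, self-contained sketch (the toy data, layer sizes, and names are hypothetical, not from the original model) showing how losses and label arrays line up with the outputs:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# hypothetical toy data: 100 samples, 10 features,
# one class label (0-4) and one continuous target per sample
x = np.random.rand(100, 10).astype("float32")
y_class = np.random.randint(0, 5, size=(100,))
y_reg = np.random.rand(100, 1).astype("float32")

inp = layers.Input(shape=(10,))
hidden = layers.Dense(32, activation="relu")(inp)
out_cat = layers.Dense(5, activation="softmax", name="cat")(hidden)
out_reg = layers.Dense(1, activation="linear", name="reg")(hidden)

model = tf.keras.Model(inputs=inp, outputs=[out_cat, out_reg])
# losses are matched to outputs positionally: classification first, regression second
model.compile(optimizer="adam", loss=["sparse_categorical_crossentropy", "mse"])

# one label array per output, in the same order as `outputs`
model.fit(x, [y_class, y_reg], epochs=3, batch_size=16)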
Having multiple outputs also means that you of course need as many y-label values as you have outputs, so keep that in mind during data preparation.
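For the TF 1.x graph in the question specifically, multi-output regression does not even need a second head: widening the final layer is enough. A minimal sketch of just the changed lines, assuming train_y is prepared with shape [N, 6] and with weight_func, bias_func, and h_fc1_drop as defined in the question's code:

NUM_OUTPUTS = 6  # x, y, z, pitch, yaw, roll for one joint (e.g. the elbow)

# the target placeholder now holds 6 values per sample instead of 1
ys = tf.placeholder(tf.float32, [None, NUM_OUTPUTS])

# ... conv layers and fc1 unchanged ...

# the final layer maps the 2304 fc1 units to 6 regression outputs
W_fc2 = weight_func([2304, NUM_OUTPUTS])
b_fc2 = bias_func([NUM_OUTPUTS])
prediction = tf.add(tf.matmul(h_fc1_drop, W_fc2), b_fc2, name='output_node')

# the sum-of-squares loss already sums over the last axis,
# so it works unchanged for a vector-valued target
loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), axis=[1]))

The same idea scales to several joints at once, e.g. NUM_OUTPUTS = 12 for elbow plus shoulder, as long as the label columns are ordered consistently.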