Combine tf.keras.layers with TensorFlow low-level API
Can I combine tf.keras.layers with low-level TensorFlow? The code below is not correct, but I would like to do something like this: create placeholders that will later be fed with data (inside tf.Session()) and feed that data to my model:
X, Y = create_placeholders(n_x, n_y)
output = create_model('channels_last')(X)
cost = compute_cost(output, Y)
Yes, this works the same way as using tf.layers.dense(). In fact, using tf.keras.layers.Dense() is the preferred way in the latest TensorFlow release, 1.13 (tf.layers.dense() is deprecated). For example:
import tensorflow as tf
import numpy as np

# Toy training data: two samples, two features, two classes
x_train = np.array([[-1.551, -1.469], [1.022, 1.664]], dtype=np.float32)
y_train = np.array([1, 0], dtype=int)

# Low-level placeholders that will be fed inside tf.Session()
x = tf.placeholder(tf.float32, shape=[None, 2])
y = tf.placeholder(tf.int32, shape=[None])

with tf.name_scope('network'):
    # tf.keras layers called on plain tensors add ordinary ops to the graph
    layer1 = tf.keras.layers.Dense(2, input_shape=(2, ))
    layer2 = tf.keras.layers.Dense(2, input_shape=(2, ))
    fc1 = layer1(x)
    logits = layer2(fc1)

with tf.name_scope('loss'):
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
    loss_fn = tf.reduce_mean(xentropy)

with tf.name_scope('optimizer'):
    optimizer = tf.train.GradientDescentOptimizer(0.01)
    train_op = optimizer.minimize(loss_fn)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    loss_val = sess.run(loss_fn, feed_dict={x: x_train, y: y_train})
    _ = sess.run(train_op, feed_dict={x: x_train, y: y_train})
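The same idea carries over to the structure in your question. Since create_placeholders, create_model and compute_cost are not shown there, the versions below are only an assumed sketch of what they might look like; the 'channels_last' argument would only matter for convolutional layers and is merely accepted (and ignored) here, because the sketch uses Dense layers to stay close to the example above:

import tensorflow as tf
import numpy as np

def create_placeholders(n_x, n_y):
    # Placeholders are fed later, inside tf.Session(); None leaves the batch size flexible
    X = tf.placeholder(tf.float32, shape=[None, n_x], name='X')
    Y = tf.placeholder(tf.float32, shape=[None, n_y], name='Y')  # one-hot labels
    return X, Y

def create_model(data_format):
    # data_format only matters for convolutional layers; it is accepted here
    # just to mirror the call signature in the question
    return tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation='relu'),
        tf.keras.layers.Dense(2),                # logits for two classes
    ])

def compute_cost(output, Y):
    return tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y, logits=output))

X, Y = create_placeholders(n_x=2, n_y=2)
output = create_model('channels_last')(X)        # calling the model builds the graph
cost = compute_cost(output, Y)
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(cost)

x_train = np.array([[-1.551, -1.469], [1.022, 1.664]], dtype=np.float32)
y_train = np.array([[0., 1.], [1., 0.]], dtype=np.float32)  # one-hot targets

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(10):                          # a few training steps
        _, cost_val = sess.run([train_op, cost], feed_dict={X: x_train, Y: y_train})

Calling the tf.keras model on a placeholder simply adds ops to the default graph, and the layer weights are registered as regular TensorFlow variables, so tf.global_variables_initializer() and a low-level optimizer work on them exactly as in the first example.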