Implementing a TensorFlow graph into a Keras model
I am trying to implement roughly the following architecture, preferably in Keras, otherwise in TensorFlow.
           ___________      _________      _________      _________      _______
          |   Conv    |    |   Max   |    |  Dense  |    |         |    |       |
Input0--> |  Layer 1  | -->|  Pool 1 | -->|  Layer  | -->|   Sum   | -->|  Out  |
          |___________|    |_________|    |_________|    |  Layer  |    |_______|
                                                         |         |
Input1 -------- Converted to trainable weights --------->|_________|
In short, it is more or less a model with two inputs that are merged into a single output by an Add([input0, input1]) layer. The trick is that one of the inputs has to be treated as a variable, i.e. as trainable weights.
The Keras Add() layer does not allow this: it treats both input0 and input1 as non-trainable tensors:
num_classes = 10  # e.g. MNIST

input0 = Input((28, 28, 1))
x = Conv2D(32, kernel_size=(3, 3), activation='relu')(input0)
x = Conv2D(64, (3, 3), activation='relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)

input1 = Input((128,))
x = Add()([x, input1])

x = Dense(num_classes, activation='softmax')(x)

model = Model(inputs=[input0, input1], outputs=x)
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
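A quick way to see the problem (a minimal check against the model built above, not part of my original attempt): an Input tensor never shows up in model.trainable_weights, so nothing fed through input1 can be updated by the optimizer, however it is combined by Add().

# Only the Conv2D and Dense kernels/biases are listed; input1 is absent.
print([w.name for w in model.trainable_weights])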
I can build a graph in plain TensorFlow that adds a placeholder X and a weight b, and learns the value of b with respect to a target Y:
import numpy
import tensorflow as tf

learning_rate = 0.01
epochs = 1000

train_X = numpy.asarray([1.0, 2.0])
train_Y = numpy.asarray([0.0, 2.5])
n_samples = train_X.shape[0]

# tf Graph input
X = tf.placeholder("float")
Y = tf.placeholder("float")

# Set model weights
b = tf.Variable([0.0, 0.0], name="bias")

# Construct a linear model
pred = tf.add(X, b)
loss = tf.reduce_mean(tf.square(pred - train_Y))

optimizer = tf.train.GradientDescentOptimizer(learning_rate)
grads_and_vars = optimizer.compute_gradients(loss)
train = optimizer.apply_gradients(grads_and_vars)

init = tf.global_variables_initializer()

sess = tf.Session()
sess.run(init)
for step in range(epochs):
    sess.run(train, feed_dict={X: train_X, Y: train_Y})
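Since pred = X + b and the loss is the squared error against train_Y, gradient descent drives b toward train_Y - train_X. A quick check (assuming the session above is still open):

print(sess.run(b))  # with enough epochs, b approaches [-1.0, 0.5]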
This does exactly what I want: a simple, optimizable addition of an input and a set of weights. But I cannot include it in a Keras model; I am missing the step that merges the two ideas.
How can I include a layer that adds a trainable tensor to a non-trainable one?
I am not sure I fully understand what you need. Judging from your TensorFlow code, I don't think you have to feed in the initial value as an input. In that case, I hope the following is at least close to what you want:
import numpy as np
import keras
from keras import backend as K
from keras.engine.topology import Layer
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, Add


class MyLayer(Layer):

    def __init__(self, bias_init, **kwargs):
        self.bias_init = bias_init
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # One trainable weight per feature, initialized from bias_init.
        self.bias = self.add_weight(name='bias',
                                    shape=input_shape[1:],
                                    initializer=keras.initializers.Constant(self.bias_init),
                                    trainable=True)
        super(MyLayer, self).build(input_shape)  # Be sure to call this somewhere!

    def call(self, x):
        return x + self.bias


input0 = Input((28, 28, 1))
x = Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1))(input0)
x = Conv2D(64, (3, 3), activation='relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)

input1 = np.random.rand(128)
x = MyLayer(input1)(x)

x = Dense(10, activation='softmax')(x)

model = Model(inputs=input0, outputs=x)
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
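A short smoke test (with made-up random data, purely to illustrate that the bias behaves like any other trainable weight): after fitting, the learned vector can be read back from the layer with get_weights().

# Hypothetical random data whose shapes match the model above.
x_train = np.random.rand(32, 28, 28, 1)
y_train = keras.utils.to_categorical(np.random.randint(10, size=32), 10)

model.fit(x_train, y_train, epochs=1, batch_size=8)

# MyLayer's bias is updated by fit() like any other weight and can be inspected.
bias_layer = [l for l in model.layers if isinstance(l, MyLayer)][0]
print(bias_layer.get_weights()[0].shape)  # (128,)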