"InvalidArgumentError: Incompatible shapes: [64,40000] vs. [64] [Op:Mul]" while doing operations between tensors?
"InvalidArgumentError: Incompatible shapes: [64,40000] vs. [64] [Op:Mul]" while doing operations between tensors?
I am trying to perform this operation between two tensors:
green_mat = sio.loadmat('green.mat')
green = np.array(green_mat['G2'])
green = tf.convert_to_tensor(green)
green = tf.cast(green, dtype='complex64')  # >>> green.shape = TensorShape([64, 40000])
tensor = tf.ones(128, 1)  # tensor.shape = TensorShape([128])

def mul_and_sum(tensor):
    real = tensor[0:64]
    imag = tensor[64:128]
    complex_tensor = tf.complex(real, imag)
    return tf.reduce_sum((tf.multiply(green, complex_tensor), 1))

res = mul_and_sum(tensor)
Basically, what I want to get at the end is a tensor with 40000 elements to use as a layer of a neural network, but when I run this function as a test I get this error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [64,40000] vs. [64] [Op:Mul]
It's the first time I work with tensor operations, and maybe I'm a bit confused about how to handle the dimensions. Any suggestions? Thanks :)
EDIT: OK, I understood this point, and indeed the example you provided works, but I have another problem with my network:
def convolution(tensor):
    tf.cast(tensor, dtype='float64')
    real = tensor[0:64]
    imag = tensor[64:128]
    complex_tensor = tf.complex(real, imag)
    a = tf.math.real(tf.reduce_sum((tf.multiply(green, complex_tensor)), 0))
    return a
def get_model3(mask_kind):
    epochs = 200
    learning_rate = 0.1
    decay_rate = learning_rate / epochs

    inp_1 = keras.Input(shape=(64, 101, 129), name="RST_inputs")
    x = layers.Conv2D(1, kernel_size=(1, 1), strides=(1, 1), padding="valid", trainable=False)(inp_1)
    x = layers.Conv2D(256, kernel_size=(3, 3), kernel_regularizer=l2(1e-6), strides=(3, 3), padding="same")(x)
    x = layers.LeakyReLU(alpha=0.3)(x)
    x = layers.Conv2D(128, kernel_size=(3, 3), kernel_regularizer=l2(1e-6), strides=(3, 3), padding="same")(x)
    x = layers.LeakyReLU(alpha=0.3)(x)
    x = layers.Conv2D(64, kernel_size=(2, 2), kernel_regularizer=l2(1e-6), strides=(2, 2), padding="same")(x)
    x = layers.LeakyReLU(alpha=0.3)(x)
    x = layers.Conv2D(32, kernel_size=(2, 2), kernel_regularizer=l2(1e-6), strides=(2, 2), padding="same")(x)
    x = layers.LeakyReLU(alpha=0.3)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(512)(x)
    x = layers.LeakyReLU(alpha=0.3)(x)
    x = layers.Dense(256)(x)
    x = layers.LeakyReLU(alpha=0.3)(x)
    out1 = layers.Dense(128, name="ls_weights")(x)

    if mask_kind == 1:
        binary_mask = layers.Lambda(mask_layer1, name="lambda_layer", dtype='float64')(out1)
    elif mask_kind == 2:
        binary_mask = layers.Lambda(mask_layer2, name="lambda_layer", dtype='float64')(out1)
    else:
        binary_mask = out1

    # here the binary mask shape is [?, 128]
    binary_mask = tf.expand_dims(binary_mask, axis=2)  # here the shape is [?, 128, 1]
    binary_mask = tf.squeeze(binary_mask, axis=0)  # here the shape is [128, 1]
    print('binary shape:', binary_mask.shape)

    lambda_layer = layers.Lambda(convolution, name="convolutional_layer")(binary_mask)
    print(lambda_layer.shape)

    model3 = keras.Model(inp_1, lambda_layer, name="2_out_model")
    model3.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=decay_rate),  # if needed, set back to 0.001
                   loss="mean_squared_error")
    plot_model(model3, to_file='model.png', show_shapes=True, show_layer_names=True)
    model3.summary()
    return model3
I get this error:
ValueError: Input 0 of layer sf_vec is incompatible with the layer: : expected min_ndim=2, found ndim=1. Full shape received: [40000]
I know it's due to a mismatch between the dimensions, but the thing is that the output layer (tensor) should have shape [?, 40000], while I only get a tensor of [40000]. Any suggestions?
EDIT 2.0: I hadn't noticed that my output was already the lambda layer, so writing the model this way I don't get any error, but from the summary I get a lambda shape of (1, 40000), whereas normally it should be (None, 40000).
Where is the mistake?
If you want to perform a multiplication between 2 tensors, they need to have compatible shapes, i.e., either the same shape, or a shape that is broadcastable. Quoting the numpy documentation (tensorflow follows the same broadcasting rules):
When operating on two arrays, NumPy compares their shapes element-wise. It starts with the trailing dimensions and works its way forward. Two dimensions are compatible when
- they are equal, or
- one of them is 1
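A quick check of those rules with the shapes from the question (a sketch, using an all-ones 64x40000 matrix to stand in for green):

>>> import tensorflow as tf
>>> a = tf.ones((64, 40000))
>>> b = tf.ones(64)
>>> tf.multiply(a, b)  # trailing dimensions are 40000 vs 64: neither equal nor 1
tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [64,40000] vs. [64] [Op:Mul]
>>> tf.multiply(a, tf.ones((64, 1))).shape  # trailing dimensions 40000 vs 1: broadcastable
TensorShape([64, 40000])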
In your case, if you want to use tf.multiply, you need to add a dimension to your vector so that it has the same number of dimensions as the matrix. You can do that by using tf.expand_dims, or with fancy indexing using tf.newaxis.
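Both give the same result, e.g.:

>>> v = tf.ones(64)
>>> tf.expand_dims(v, axis=1).shape
TensorShape([64, 1])
>>> v[:, tf.newaxis].shape
TensorShape([64, 1])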
An example (using complex numbers, as in your question):
>>> a = tf.complex(tf.random.normal((64,128)),tf.random.normal((64,128)))
>>> a.shape
TensorShape([64, 128])
>>> b = tf.complex(tf.ones(64),tf.ones(64))
>>> b.shape
TensorShape([64])
To be able to use tf.multiply, you need to add a dimension to b:
>>> b_exp = tf.expand_dims(b, axis=1)
>>> b_exp.shape
TensorShape([64, 1])
>>> tf.multiply(a,b_exp).shape
TensorShape([64, 128])
Note: doing a tf.reduce_sum on a tf.multiply is similar to doing a matrix multiplication. In your case, you can probably do something like:
>>> tf.matmul(b[tf.newaxis,:], a).shape
TensorShape([1, 128])
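To see the equivalence with the elementwise route (reusing a and b_exp from above; the sum runs over axis 0, the dimension of length 64):

>>> tf.reduce_sum(tf.multiply(a, b_exp), axis=0).shape
TensorShape([128])

This holds the same values as the matmul result above, just without the leading dimension of 1.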
If the extra dimension bothers you, you can get rid of it with tf.squeeze.
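For instance, continuing the example above:

>>> tf.squeeze(tf.matmul(b[tf.newaxis, :], a), axis=0).shape
TensorShape([128])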