Error in Keras model for classification with transformers
I followed this tutorial:
https://keras.io/examples/timeseries/timeseries_transformer_classification/
to apply a transformer-based classification model to my time series.
However, on the line:
x = layers.MultiHeadAttention(
    key_dim=head_size, num_heads=num_heads, dropout=dropout
)(x, x)
I get this error:
{IndexError}tuple index out of range
Any idea why?
Disclosure: I came here for the bounty, then I tried it in Colab and everything worked fine.
Next I read the comment: "This question in its current state is a joke. Impossible to reproduce." At this point I agree. But since I am Hans in Luck and apparently have plenty of time to procrastinate, I fired up PyCharm, following the OP's hint: "No, when I paste it into my PyCharm I get the error above."
But it works for me there as well, which makes me wonder whether you changed something, so I am happy to give you an (un)touched working version.
import numpy as np


def readucr(filename):
    data = np.loadtxt(filename, delimiter="\t")
    y = data[:, 0]
    x = data[:, 1:]
    return x, y.astype(int)


root_url = "https://raw.githubusercontent.com/hfawaz/cd-diagram/master/FordA/"

x_train, y_train = readucr(root_url + "FordA_TRAIN.tsv")
x_test, y_test = readucr(root_url + "FordA_TEST.tsv")

x_train = x_train.reshape((x_train.shape[0], x_train.shape[1], 1))
x_test = x_test.reshape((x_test.shape[0], x_test.shape[1], 1))

n_classes = len(np.unique(y_train))

idx = np.random.permutation(len(x_train))
x_train = x_train[idx]
y_train = y_train[idx]

y_train[y_train == -1] = 0
y_test[y_test == -1] = 0

from tensorflow import keras
from tensorflow.keras import layers


def transformer_encoder(inputs, head_size, num_heads, ff_dim, dropout=0):
    # Normalization and Attention
    x = layers.LayerNormalization(epsilon=1e-6)(inputs)
    x = layers.MultiHeadAttention(
        key_dim=head_size, num_heads=num_heads, dropout=dropout
    )(x, x)
    x = layers.Dropout(dropout)(x)
    res = x + inputs

    # Feed Forward Part
    x = layers.LayerNormalization(epsilon=1e-6)(res)
    x = layers.Conv1D(filters=ff_dim, kernel_size=1, activation="relu")(x)
    x = layers.Dropout(dropout)(x)
    x = layers.Conv1D(filters=inputs.shape[-1], kernel_size=1)(x)
    return x + res
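For completeness, here is a minimal sketch of the model-building and compile step around that encoder, following the same tutorial; build_model, the hyperparameters and the learning rate below are the example's illustrative values, not tuned settings.

def build_model(
    input_shape,
    head_size,
    num_heads,
    ff_dim,
    num_transformer_blocks,
    mlp_units,
    dropout=0,
    mlp_dropout=0,
):
    inputs = keras.Input(shape=input_shape)
    x = inputs
    # Stack several encoder blocks defined above
    for _ in range(num_transformer_blocks):
        x = transformer_encoder(x, head_size, num_heads, ff_dim, dropout)

    # Global pooling over the sequence, then a small MLP classification head
    x = layers.GlobalAveragePooling1D(data_format="channels_first")(x)
    for dim in mlp_units:
        x = layers.Dense(dim, activation="relu")(x)
        x = layers.Dropout(mlp_dropout)(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return keras.Model(inputs, outputs)


model = build_model(
    x_train.shape[1:],
    head_size=256,
    num_heads=4,
    ff_dim=4,
    num_transformer_blocks=4,
    mlp_units=[128],
    mlp_dropout=0.4,
    dropout=0.25,
)
model.compile(
    loss="sparse_categorical_crossentropy",
    optimizer=keras.optimizers.Adam(learning_rate=1e-4),
    metrics=["sparse_categorical_accuracy"],
)
model.summary()

If model.summary() prints without raising, the MultiHeadAttention call itself is fine in your environment and the problem lies elsewhere.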
Also make sure we are talking about the same package versions:
I used numpy (1.21.2) and tensorflow (2.6.0) - try those versions, or let me know if you are on different ones.
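A quick way to see which versions your interpreter actually picks up, assuming you run it from the same PyCharm environment:

import numpy as np
import tensorflow as tf

# Print the versions loaded in the current interpreter / virtualenv,
# so they can be compared with the ones mentioned above.
print("numpy:", np.__version__)
print("tensorflow:", tf.__version__)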