Neural network with TensorFlow: recall over 100
I am trying to get all the metrics for the model I created:
def build_rnn_gru_model(tokenizer):
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(len(tokenizer.word_index) + 1, 64, input_length=863),
        tf.keras.layers.GRU(64, activation='relu', return_sequences=True),
        tf.keras.layers.Dense(1, activation='sigmoid')
    ])
    model.summary()
    model.compile(loss='binary_crossentropy', optimizer='adam',
                  metrics=['accuracy', f1, precision, recall])
    return model
I also used the metric definitions suggested in the highly upvoted answer to How to get accuracy, F1, precision and recall, for a keras model?, but the result is the same:
def recall(y_true, y_pred):
    true_positives = K.sum(K.round(y_pred) * y_true)
    possible_positives = K.sum(y_true)
    return true_positives / (possible_positives + K.epsilon())

def precision(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision

def f1(y_true, y_pred):
    precision_ = precision(y_true, y_pred)
    recall_ = recall(y_true, y_pred)
    return 2 * ((precision_ * recall_) / (precision_ + recall_ + K.epsilon()))
Everything looks fine when evaluating models with LSTM layers or without recurrent layers, but with GRU the recall value is extremely high:
199/1180 [====>.........................] - ETA: 4:45 - loss: 0.3988 - accuracy: 0.8230 - f1: 1.6155 - precision: 0.8195 - recall: 468.6583
Can anyone tell me what is going wrong?
For TF 2 I recommend using the predefined metrics, in your case tf.keras.metrics.Recall:
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=[tf.keras.metrics.Recall(), ...])
I also suggest setting return_sequences=False in your GRU layer, since you appear to be performing a binary classification task.
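The inflated recall is consistent with a shape mismatch: with return_sequences=True the model emits one sigmoid per timestep, so y_pred has shape (batch, 863, 1) while y_true for binary classification has shape (batch, 1). Inside the custom recall, K.round(y_pred) * y_true then broadcasts across all timesteps, so true_positives can exceed possible_positives by up to a factor of 863. A minimal numpy sketch (5 timesteps standing in for the asker's 863) reproduces the effect:

```python
import numpy as np

eps = 1e-7  # stands in for K.epsilon()

# One positive sample of a binary task: shape (1, 1)
y_true = np.ones((1, 1))

# With return_sequences=True the model outputs one sigmoid per timestep:
# shape (1, 5, 1) here. All predictions are confident positives.
y_pred = np.ones((1, 5, 1))

# The custom recall metric, transcribed to numpy:
true_positives = np.sum(np.round(y_pred) * y_true)  # broadcasts to (1, 5, 1) -> 5.0
possible_positives = np.sum(y_true)                 # 1.0
recall = true_positives / (possible_positives + eps)
print(recall)  # ~5.0: recall inflated by the number of timesteps
```

With return_sequences=False the prediction shape collapses to (1, 1), true_positives can never exceed possible_positives, and the metric stays in [0, 1].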