Tensorflow-Keras reproducibility problem on Google Colab

I have a simple piece of code that runs on Google Colab (I use CPU mode):

import numpy as np
import pandas as pd

## LOAD DATASET

datatrain = pd.read_csv("gdrive/My Drive/iris_train.csv").values
xtrain = datatrain[:,:-1]
ytrain = datatrain[:,-1]

datatest = pd.read_csv("gdrive/My Drive/iris_test.csv").values
xtest = datatest[:,:-1]
ytest = datatest[:,-1]

import tensorflow as tf
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.utils import to_categorical

## SET ALL SEED

import os
os.environ['PYTHONHASHSEED']=str(66)

import random
random.seed(66)

np.random.seed(66)
tf.set_random_seed(66)

from tensorflow.keras import backend as K
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
K.set_session(sess)

## MAIN PROGRAM

ycat = to_categorical(ytrain) 

# build model
model = tf.keras.Sequential()
model.add(Dense(10, input_shape=(4,)))
model.add(Activation("sigmoid"))
model.add(Dense(3))
model.add(Activation("softmax"))

#choose optimizer and loss function
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])

# train
model.fit(xtrain, ycat, epochs=15, batch_size=32)

#get prediction
classes = model.predict_classes(xtest)

#get accuracy
accuracy = np.sum(classes == ytest)/len(ytest) * 100

I have already read the setup for creating reproducible code here: Reproducible results using Keras with TensorFlow backend, and I put all of the code in the same cell. But every time I run that cell (running the cell with Shift+Enter), the results (the loss, for example) are always different.

In my case, the results of the code above are reproducible only if:

  1. I run it with "Runtime" > "Restart and run all", or
  2. I put the code in a file and run it from the command line (python3 file.py)

Is there something I'm missing that would make the results reproducible without restarting the runtime?

You should also fix the seed of the kernel_initializer in the Dense layers. So your model would look like this:

model = tf.keras.Sequential()
model.add(Dense(10, kernel_initializer=tf.keras.initializers.glorot_uniform(seed=66), input_shape=(4,)))
model.add(Activation("sigmoid"))
model.add(Dense(3, kernel_initializer=tf.keras.initializers.glorot_uniform(seed=66)))
model.add(Activation("softmax"))

I tried to get TensorFlow 2.0 working reproducibly with Keras on Google Colab (CPU), processing the Iris dataset in a version similar to the one @malioboro described above. This seems to work - it might be useful:

# Install TensorFlow
try:
  # %tensorflow_version only exists in Colab.
  %tensorflow_version 2.x
except Exception:
  pass

# Setup repro section from Keras FAQ with TF1 to TF2 adjustments

import numpy as np
import tensorflow as tf
import random as rn

# The below is necessary for starting Numpy generated random numbers
# in a well-defined initial state.

np.random.seed(42)

# The below is necessary for starting core Python generated random numbers
# in a well-defined state.

rn.seed(12345)

# Force TensorFlow to use single thread.
# Multiple threads are a potential source of non-reproducible results.
# For further details, see: 

session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1,
                                        inter_op_parallelism_threads=1)

# The below tf.set_random_seed() will make random number generation
# in the TensorFlow backend have a well-defined initial state.
# For further details, see:
# https://www.tensorflow.org/api_docs/python/tf/set_random_seed

tf.compat.v1.set_random_seed(1234)

sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
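
# The compat.v1 calls above keep the TF1-style session/seed recipe working on TF 2.x.
# As an aside (an assumption about newer releases, not part of this recipe):
# TF 2.7+ can seed Python, NumPy and TensorFlow in one call with
# tf.keras.utils.set_random_seed(1234), and TF 2.8+ adds
# tf.config.experimental.enable_op_determinism() to force deterministic ops.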

# Rest of code follows ...
# Some adapted from: https://janakiev.com/notebooks/keras-iris/
# Some adapted from the question.
#
# Load Data
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder, StandardScaler

iris = load_iris()
X = iris['data']
y = iris['target']
names = iris['target_names']
feature_names = iris['feature_names']

# One hot encoding
enc = OneHotEncoder()
Y = enc.fit_transform(y[:, np.newaxis]).toarray()

# Scale data to have mean 0 and variance 1 
# which is important for convergence of the neural network
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Split the data set into training and testing
X_train, X_test, Y_train, Y_test = train_test_split(
    X_scaled, Y, test_size=0.5, random_state=2)

n_features = X.shape[1]
n_classes = Y.shape[1]

## MAIN PROGRAM
from tensorflow.keras.layers import Dense, Activation 

# build model
model = tf.keras.Sequential()
model.add(Dense(10, input_shape=(4,)))
model.add(Activation("sigmoid"))
model.add(Dense(3))
model.add(Activation("softmax"))

#choose optimizer and loss function
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])

# train
model.fit(X_train, Y_train, epochs=20, batch_size=32)

#get prediction
classes = model.predict_classes(X_test)
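
To mirror the accuracy check from the question, the predicted class indices can be compared against the one-hot encoded Y_test (a small addition, assuming the variables defined above):

#get accuracy: Y_test is one-hot, so recover class indices with argmax
accuracy = np.sum(classes == np.argmax(Y_test, axis=1)) / len(Y_test) * 100
print(accuracy)

Note that predict_classes was removed from Sequential models in later TensorFlow releases; np.argmax(model.predict(X_test), axis=1) gives the same class indices.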