Random results from pre-trained InceptionV3 CNN

I am trying to create an InceptionV3 CNN which has previously been trained on ImageNet. While creating and loading the checkpoint seems to work correctly, the result appears to be random: every time I run the script I get a different result, even though I don't change anything. The network is recreated from scratch, the same unchanged checkpoint is loaded, and the same image is classified (which, to my understanding, should still lead to the same result, even if the network can't decide what the image actually is).

I just noticed that even if I try to classify the same image multiple times within the same execution of the script, I still end up with a random result.

I create the CNN like this:

import tensorflow as tf
from tensorflow.contrib import layers as layers_lib
from tensorflow.contrib.slim.nets import inception as nn_architecture
from tensorflow.contrib import slim

with slim.arg_scope([slim.conv2d, slim.fully_connected], normalizer_fn=slim.batch_norm,
                    normalizer_params={'updates_collections': None}):  # fix for the model not matching the checkpoint, see https://github.com/tensorflow/models/issues/2977
    logits, endpoints = nn_architecture.inception_v3(input,  # input tensor
                                                     1001,  # NUM_CLASSES
                                                     # maybe set num_classes to 0 or None to omit the logit layer and return its input instead
                                                     True,  # is_training (dropout is disabled if False, for eval)
                                                     0.8,  # dropout keep rate
                                                     16,  # min depth
                                                     1.0,  # depth multiplier
                                                     layers_lib.softmax,  # prediction function
                                                     True,  # spatial squeeze
                                                     tf.AUTO_REUSE,  # reuse, use get_variable to get variables directly... probably
                                                     'InceptionV3')  # scope

Then I load the ImageNet-trained checkpoint like this:

saver = tf.train.Saver()
saver.restore(sess, CHECKPOINT_PATH)
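
(For context: the network above is built on an image placeholder and the restore needs a live session. Roughly, the surrounding setup looks like the sketch below; the exact code, including the real variable names, is in the gist linked further down.)

import tensorflow as tf

# sketch of the setup the snippets rely on; names here are only illustrative
images = tf.placeholder(tf.float32, shape=[None, 299, 299, 3])  # batch of 299x299 RGB images
sess = tf.Session()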

Then I verify that it works correctly by classifying this image of a car.

I scale it down from its original resolution to 299x299, which is the input size the network expects:

import numpy as np
from scipy.ndimage import zoom
from skimage import io

car = io.imread("data/car.jpg")
car_scaled = zoom(car, [299 / car.shape[0], 299 / car.shape[1], 1])  # resize to 299x299, keep the 3 channels

car_cnnable = np.array([car_scaled])  # add a batch dimension

Then I try to classify the image and print which class it most likely belongs to and how likely that is.

predictions = sess.run(logits, feed_dict={images: car_cnnable})
predictions = np.squeeze(predictions) #shape (1, 1001) to shape (1001)  

print(np.argmax(predictions))
print(predictions[np.argmax(predictions)])
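
Running the exact same feed twice within one session makes the effect easy to see (a minimal check, reusing the variables from above):

# classify the identical input twice in the same session
pred_a = np.squeeze(sess.run(logits, feed_dict={images: car_cnnable}))
pred_b = np.squeeze(sess.run(logits, feed_dict={images: car_cnnable}))

print(np.argmax(pred_a), pred_a[np.argmax(pred_a)])
print(np.argmax(pred_b), pred_b[np.argmax(pred_b)])
print(np.allclose(pred_a, pred_b))  # expected to print False while is_training is True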

The class is (or seems to be) random, and the likelihood varies as well. My last few runs were:

Class - likelihood 
899 - 0.98858
660 - 0.887204
734 - 0.904047
675 - 0.886952

Here is my full code: https://gist.github.com/Syzygy2048/ddb8602652b547a71316ee0febfddbef

Because I set is_training to True, dropout was applied every time the network was used. I was under the impression that this only happened during backpropagation, but dropout actually randomly zeroes activations in the forward pass whenever is_training is True.
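
As a small, self-contained illustration (a toy sketch, not my original code): dropout randomizes the forward pass itself, so every run produces a different output as long as it is active.

import numpy as np
import tensorflow as tf

x = tf.constant(np.ones((1, 4), dtype=np.float32))
dropped = tf.nn.dropout(x, keep_prob=0.8)  # active dropout randomizes each forward pass

with tf.Session() as toy_sess:
    print(toy_sess.run(dropped))  # a different mask (and rescaling) on every run
    print(toy_sess.run(dropped))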

To make it work correctly, the code should be:

logits, endpoints = nn_architecture.inception_v3(input,  # input tensor
                                                 1001,  # NUM_CLASSES
                                                 # maybe set num_classes to 0 or None to omit the logit layer and return its input instead
                                                 False,  # is_training (dropout is disabled if False, for eval)
                                                 0.8,  # dropout keep rate
                                                 16,  # min depth
                                                 1.0,  # depth multiplier
                                                 layers_lib.softmax,  # prediction function
                                                 True,  # spatial squeeze
                                                 tf.AUTO_REUSE,  # reuse, use get_variable to get variables directly... probably
                                                 'InceptionV3')  # scope
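
With is_training set to False, dropout is disabled (and batch norm uses its stored moving averages), so classifying the same image repeatedly now produces identical results.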