Deepanimebot on Docker: Mismatch between number of layers in weight file and model
I'm taking the DeepClassificationBot Docker image (classificationbot/deploy-base:latest) and customizing it with my own model (data, categories, and model weights). It looks like only the model and the weights are actually needed.
The model works fine locally, but when I deploy it as a webapp inside the Docker container, it raises:
Exception: You are trying to load a weight file containing 45 layers
into a model with 34 layers.
It looks like the model (Keras + HDF5) and the weight file don't match.
These seem to be their locations (the root folder only contains a few .py scripts). I am already copying the folders that contain the HDF5 data and weights:
data # This is where the extracted and preprocessed data are saved.
-categories.p
-data.hdf5
-README.md
pre_trained_weights # This is where the trained model weights are saved.
-latest_model_weights.hdf5
-model_weights.hdf5
-README.md
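One thing that helps reason about the mismatch is inspecting the weight file directly. This is a minimal diagnostic sketch (not part of the repo), assuming the file was written by Keras' save_weights, which stores the saved layer names in a top-level 'layer_names' HDF5 attribute:
import h5py

# Count and list the layer groups stored in the weight file; here it should
# report the 45 layers mentioned in the exception.
with h5py.File('pre_trained_weights/latest_model_weights.hdf5', 'r') as f:
    saved_layers = [n.decode('utf8') if isinstance(n, bytes) else n
                    for n in f.attrs['layer_names']]
print('%d layers in the weight file' % len(saved_layers))
print(saved_layers[:5])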
The problem might be in the Dockerfile (maybe some required files are not being copied).
~/DeepClassificationBot-master/dockerfiles/webapp/Dockerfile:
FROM classificationbot/deploy-base:latest
COPY ./requirements-webapp.txt /tmp/
RUN pip install -r /tmp/requirements-webapp.txt
# OVERWRITE DEMO DATA WITH CUSTOM FOLDERS
COPY ./data /opt/bot/data
COPY ./pre_trained_weights /opt/bot/pre_trained_weights
# I ALSO TRIED TO OVERWRITE THIS FOLDER / FILES - SAME EXCEPTION
# COPY ./deepanimebot /opt/bot/deepanimebot
# COPY ./data.py /opt/bot/data.py
# COPY ./model.py /opt/bot/model.py
# COPY ./deploy.py /opt/bot/deploy.py
WORKDIR /opt/bot
ENTRYPOINT ["/usr/local/bin/gunicorn", "-b", ":80", "deepanimebot.wsgi:app"]
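To rule out the COPY steps themselves, one option is to compare the files inside the built image with the local ones, for example by mounting a small script and overriding the entrypoint (docker run --rm --entrypoint python ...). A hypothetical check, assuming the files end up under /opt/bot as in the Dockerfile above:
import hashlib
import os

# Print an MD5 checksum for each copied file so it can be compared against
# the local copies; identical hashes mean the COPY steps are not the problem.
for rel_path in ('pre_trained_weights/latest_model_weights.hdf5',
                 'pre_trained_weights/model_weights.hdf5',
                 'data/data.hdf5',
                 'data/categories.p'):
    with open(os.path.join('/opt/bot', rel_path), 'rb') as f:
        print('%s  %s' % (hashlib.md5(f.read()).hexdigest(), rel_path))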
Console output:
$ docker build -t classificationbot/webapp:latest -f dockerfiles/webapp/Dockerfile .
...
Successfully built e1159596c19f
$ docker run e1159596c19f
[2016-11-06 14:39:38 +0000] [1] [INFO] Starting gunicorn 19.6.0
[2016-11-06 14:39:38 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
[2016-11-06 14:39:38 +0000] [1] [INFO] Using worker: sync
[2016-11-06 14:39:38 +0000] [9] [INFO] Booting worker with pid: 9
libdc1394 error: Failed to initialize libdc1394
Using Theano backend.
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
convolution2d_1 (Convolution2D) (None, 64, 126, 126)1792 convolution2d_input_1[0][0]
____________________________________________________________________________________________________
zeropadding2d_1 (ZeroPadding2D) (None, 64, 128, 128)0 convolution2d_1[0][0]
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D) (None, 64, 126, 126)36928 zeropadding2d_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D) (None, 64, 63, 63) 0 convolution2d_2[0][0]
____________________________________________________________________________________________________
batchnormalization_1 (BatchNormaliz(None, 64, 63, 63) 126 maxpooling2d_1[0][0]
____________________________________________________________________________________________________
zeropadding2d_2 (ZeroPadding2D) (None, 64, 65, 65) 0 batchnormalization_1[0][0]
____________________________________________________________________________________________________
convolution2d_3 (Convolution2D) (None, 128, 63, 63) 73856 zeropadding2d_2[0][0]
____________________________________________________________________________________________________
convolution2d_4 (Convolution2D) (None, 128, 63, 63) 16512 convolution2d_3[0][0]
____________________________________________________________________________________________________
maxpooling2d_2 (MaxPooling2D) (None, 128, 31, 31) 0 convolution2d_4[0][0]
____________________________________________________________________________________________________
batchnormalization_2 (BatchNormaliz(None, 128, 31, 31) 62 maxpooling2d_2[0][0]
____________________________________________________________________________________________________
zeropadding2d_3 (ZeroPadding2D) (None, 128, 33, 33) 0 batchnormalization_2[0][0]
____________________________________________________________________________________________________
convolution2d_5 (Convolution2D) (None, 256, 31, 31) 295168 zeropadding2d_3[0][0]
____________________________________________________________________________________________________
zeropadding2d_4 (ZeroPadding2D) (None, 256, 33, 33) 0 convolution2d_5[0][0]
____________________________________________________________________________________________________
convolution2d_6 (Convolution2D) (None, 256, 31, 31) 590080 zeropadding2d_4[0][0]
____________________________________________________________________________________________________
convolution2d_7 (Convolution2D) (None, 256, 31, 31) 65792 convolution2d_6[0][0]
____________________________________________________________________________________________________
maxpooling2d_3 (MaxPooling2D) (None, 256, 15, 15) 0 convolution2d_7[0][0]
____________________________________________________________________________________________________
batchnormalization_3 (BatchNormaliz(None, 256, 15, 15) 30 maxpooling2d_3[0][0]
____________________________________________________________________________________________________
zeropadding2d_5 (ZeroPadding2D) (None, 256, 17, 17) 0 batchnormalization_3[0][0]
____________________________________________________________________________________________________
convolution2d_8 (Convolution2D) (None, 512, 15, [2016-11-06 14:39:43 +0000] [9] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/gunicorn/arbiter.py", line 557, in spawn_worker
worker.init_process()
File "/usr/local/lib/python2.7/dist-packages/gunicorn/workers/base.py", line 126, in init_process
self.load_wsgi()
File "/usr/local/lib/python2.7/dist-packages/gunicorn/workers/base.py", line 136, in load_wsgi
self.wsgi = self.app.wsgi()
File "/usr/local/lib/python2.7/dist-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/usr/local/lib/python2.7/dist-packages/gunicorn/app/wsgiapp.py", line 65, in load
return self.load_wsgiapp()
File "/usr/local/lib/python2.7/dist-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp
return util.import_app(self.app_uri)
File "/usr/local/lib/python2.7/dist-packages/gunicorn/util.py", line 357, in import_app
__import__(module)
File "/opt/bot/deepanimebot/wsgi.py", line 5, in <module>
app = create_app()
File "/opt/bot/deepanimebot/webapp.py", line 50, in create_app
app.config['MODEL_NAME']))
File "/opt/bot/deepanimebot/classifiers.py", line 50, in __init__
model_name=model_name)
File "/opt/bot/deploy.py", line 26, in load_model
model.load_weights("pre_trained_weights/latest_model_weights.hdf5")
File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 2326, in load_weights
str(len(flattened_layers)) + ' layers.')
Exception: You are trying to load a weight file containing 45 layers into a model with 34 layers.
[2016-11-06 14:39:43 +0000] [9] [INFO] Worker exiting (pid: 9)
[2016-11-06 14:39:43 +0000] [1] [INFO] Shutting down: Master
[2016-11-06 14:39:43 +0000] [1] [INFO] Reason: Worker failed to boot.
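For context, the exception comes from a sanity check Keras 1.x runs in load_weights: it compares how many layer groups are stored in the HDF5 file with how many layers the freshly built model has. A rough, hypothetical re-implementation of that comparison for debugging (len(model.layers) is only an approximation of Keras' internal flattened layer list):
import h5py

def compare_layer_counts(model, weights_path):
    # Number of layer groups saved in the weight file vs. layers in the
    # model the weights are being loaded into.
    with h5py.File(weights_path, 'r') as f:
        saved = len(f.attrs['layer_names'])
    built = len(model.layers)
    print('weight file: %d layer groups, model: %d layers' % (saved, built))
    return saved == built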
I had to make changes to dockerfiles/webapp/Dockerfile and deepanimebot/webapp.py. The reason is that the deepanimebot webapp is configured to use a different model (deep_anime_model) than deepclassificationbot (model), and the path to data/data.hdf5 was wrong in the deepanimebot module. These are the files that needed the most changes:
~/DeepClassificationBot-master/dockerfiles/webapp/Dockerfile:
FROM classificationbot/deploy-base:latest
COPY ./requirements-webapp.txt /tmp/
RUN pip install -r /tmp/requirements-webapp.txt
COPY ./data /opt/bot/data
COPY ./pre_trained_weights /opt/bot/pre_trained_weights
COPY ./deepanimebot/webapp.py /opt/bot/deepanimebot/webapp.py
WORKDIR /opt/bot
ENTRYPOINT ["/usr/local/bin/gunicorn", "-b", ":80", "deepanimebot.wsgi:app"]
~/DeepClassificationBot-master/deepanimebot/webapp.py:
app.config.setdefault('DATASET_PATH', 'data/data.hdf5')
app.config.setdefault('INPUT_SHAPE', 256)
app.config.setdefault('MODEL_NAME', 'model')
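To make it clearer why MODEL_NAME was the culprit: the webapp picks an architecture builder by name, so the wrong name builds a network whose layer count no longer matches the saved weights. A purely hypothetical sketch of that kind of name-based dispatch (the real builders live in the repo's model.py and look different):
def get_model_builder(model_name):
    # Each name maps to a different architecture; loading weights saved from
    # one architecture into the other triggers the layer-count exception.
    def model(input_shape, n_categories):
        """Architecture the custom weights were trained with."""
        raise NotImplementedError  # placeholder for the real builder

    def deep_anime_model(input_shape, n_categories):
        """Architecture the anime-bot demo was configured to use."""
        raise NotImplementedError  # placeholder for the real builder

    builders = {'model': model, 'deep_anime_model': deep_anime_model}
    return builders[model_name]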