How to input multiple source data for deploy in Caffe?

I am trying to test my network on new data. Below is the part of my deploy.prototxt file where I define the inputs:

input: "data"
input_dim: 80 
input_dim: 3
input_dim: 227
input_dim: 227
input: "modaldata"
input_dim: 80 
input_dim: 3
input_dim: 227
input_dim: 227
input: "clip_markers"
input_dim: 80 
input_dim: 1
input_dim: 1
input_dim: 1

data is the RGB file, and modaldata is a modality image (for example depth) of the same file.
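
A quick way to confirm that Caffe exposes all three blobs from this definition is to load the deploy net and print its inputs (a minimal sketch; `deploy.prototxt` and `weights.caffemodel` are placeholder file names, not my actual files):

    import caffe

    # Load the network in test phase; the file names here are placeholders.
    net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

    # With the input/input_dim fields above, all three names should be listed.
    print(net.inputs)                         # ['data', 'modaldata', 'clip_markers']
    print(net.blobs['modaldata'].data.shape)  # (80, 3, 227, 227)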

Using a Python script I preprocess both image inputs. "data" converts without any error, but I get an error when converting "modaldata":

    modalcaffe_in[ix] = transformer_modal.preprocess('modaldata', inputs)
And the error I get is:
    ..../python/caffe/io.py", line 136, in preprocess
      self.__check_input(in_)
     File "/.../python/caffe/io.py", line 115, in __check_input
      in_, self.inputs))
    Exception: modaldata is not one of the net inputs: {'data': (80, 3, 227, 227)}

I just solved this; it was my own mistake. I had initialized both transformers from the same template, assuming I was simply creating a fresh instance per call and that the 'data' input key was generic. I had to create a separate initializer for each input, and that fixed it.
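
The check that raised the exception only compares the blob name against the dict the Transformer was constructed with, so a Transformer built around `{'data': shape}` rejects any other name (a small sketch to illustrate; `img` is just a dummy HxWx3 array):

    import numpy as np
    import caffe

    t = caffe.io.Transformer({'data': (80, 3, 227, 227)})
    print(t.inputs)                    # {'data': (80, 3, 227, 227)}

    img = np.zeros((227, 227, 3), dtype=np.float32)
    t.preprocess('data', img)          # accepted
    # t.preprocess('modaldata', img)   # raises: modaldata is not one of the net inputs

With a separate initializer per input, the key passed to preprocess() matches a blob the Transformer was built for: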

    import caffe
    import numpy as np

    def initialize_transformer_RGB(image_mean, is_modal):
      # Transformer keyed by the net's 'data' blob.
      shape = (10*8, 3, 227, 227) # shape = (1*16, 3, 227, 227)
      transformerRGB = caffe.io.Transformer({'data': shape})
      # Broadcast the per-channel mean values to a full 3x227x227 mean image.
      channel_mean = np.zeros((3,227,227))
      for channel_index, mean_val in enumerate(image_mean):
        channel_mean[channel_index, ...] = mean_val
      transformerRGB.set_mean('data', channel_mean)
      transformerRGB.set_raw_scale('data', 255)           # scale [0, 1] inputs up to [0, 255]
      transformerRGB.set_channel_swap('data', (2, 1, 0))  # RGB -> BGR
      transformerRGB.set_transpose('data', (2, 0, 1))     # HxWxC -> CxHxW
      transformerRGB.set_is_modal('data', is_modal)       # custom method, not in stock caffe.io.Transformer
      return transformerRGB

    def initialize_transformer_modal(image_mean, is_modal):
      # Same preprocessing, but keyed by 'modaldata' so that
      # preprocess('modaldata', ...) passes the Transformer's input check.
      shape = (10*8, 3, 227, 227) # shape = (1*16, 3, 227, 227)
      transformermodal = caffe.io.Transformer({'modaldata': shape})
      channel_mean = np.zeros((3,227,227))
      for channel_index, mean_val in enumerate(image_mean):
        channel_mean[channel_index, ...] = mean_val
      transformermodal.set_mean('modaldata', channel_mean)
      transformermodal.set_raw_scale('modaldata', 255)
      transformermodal.set_channel_swap('modaldata', (2, 1, 0))
      transformermodal.set_transpose('modaldata', (2, 0, 1))
      transformermodal.set_is_modal('modaldata', is_modal)
      return transformermodal


    ucf_mean_RGB = np.zeros((3,1,1))
    ucf_mean_modal = np.zeros((3,1,1))
    ucf_mean_modal[:,:,:] = 128
    ucf_mean_RGB[0,:,:] = 103.939
    ucf_mean_RGB[1,:,:] = 116.779
    ucf_mean_RGB[2,:,:] = 128.68

    transformer_RGB = initialize_transformer_RGB(ucf_mean_RGB, False)
    transformer_modal = initialize_transformer_modal(ucf_mean_modal, True)
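
For completeness, a rough sketch of how the two transformers can then be used to fill the deploy net's input blobs and run a forward pass. It reuses transformer_RGB and transformer_modal from above; the file names, the dummy `rgb_frames`/`modal_frames` lists, and the all-ones `clip_markers` are placeholders, not my exact script:

    import caffe
    import numpy as np

    # Placeholder file names for the deploy definition and trained weights.
    net = caffe.Net('deploy.prototxt', 'weights.caffemodel', caffe.TEST)

    # Dummy frames; in practice these come from caffe.io.load_image on real RGB/depth frames.
    rgb_frames = [np.random.rand(227, 227, 3).astype(np.float32) for _ in range(80)]
    modal_frames = [np.random.rand(227, 227, 3).astype(np.float32) for _ in range(80)]

    # Preprocess each frame into the batch arrays, one blob name per transformer.
    caffe_in = np.zeros(net.blobs['data'].data.shape, dtype=np.float32)
    modalcaffe_in = np.zeros(net.blobs['modaldata'].data.shape, dtype=np.float32)
    for ix in range(len(rgb_frames)):
      caffe_in[ix] = transformer_RGB.preprocess('data', rgb_frames[ix])
      modalcaffe_in[ix] = transformer_modal.preprocess('modaldata', modal_frames[ix])

    # clip_markers is filled with a placeholder here; set it according to your network's convention.
    clip_markers = np.ones(net.blobs['clip_markers'].data.shape, dtype=np.float32)

    # forward() accepts one keyword argument per input blob of the deploy net.
    out = net.forward(data=caffe_in, modaldata=modalcaffe_in, clip_markers=clip_markers)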