Google Cloud ML Engine: Input instances are not in JSON format

I am using Google Cloud ML Engine for online prediction. I wrote TensorFlow Estimator API code, which I adapted from the tf-estimator-tutorials repository. To do online prediction, the model has to be exported as a protocol buffer (.pb) file. To feed inputs to the model, I wrote the following code in the serve_input_fn() function.

SERVING_HEADER = ['renancy','freq','monetary']
SERVING_HEADER_DEFAULTS = [[0.0],[0.0],[0.0]]

#shape=(?,), dtype=string
rows_string_tensor = tf.placeholder(dtype=tf.string,
                                    shape=[None],
                                    name="csv_rows")

#feeding rows_string_tensor value in the dictionary
receive_tensor = {'csv_rows':rows_string_tensor}

#shape=(?,1), dtype=string
row_columns = tf.expand_dims(rows_string_tensor, -1)

#<tf.Tensor 'DecodeCSV:0' shape=(?,1) dtype=float32>,<tf.Tensor 'DecodeCSV:1' shape=(?,1) dtype=float32>
#<tf.Tensor 'DecodeCSV:2' shape=(?,1) dtype=float32>
columns = tf.decode_csv(row_columns, record_defaults=SERVING_HEADER_DEFAULTS)

#<tf.Tensor 'Expand_dims_1:0' shape=(?,1,1) dtype=float32>,<tf.Tensor 'Expand_dims_2:0' shape=(?,1,1) dtype=float32>
#<tf.Tensor 'Expand_dims_3:0' shape=(?,1,1) dtype=float32>
columns = [tf.expand_dims(tensor, -1) for tensor in columns]

#{"renancy":<tf.Tensor 'Expand_dims_1:0' shape=(?,1,1) dtype=float32>,
#"freq":<tf.Tensor 'Expand_dims_2:0' shape=(?,1,1) dtype=float32>,
#"monetary":<tf.Tensor 'Expand_dims_3:0' shape=(?,1,1) dtype=float32>}
features = dict(zip(SERVING_HEADER, columns))


#InputFnOps(features=None, labels=None, default_inputs={'csv_rows':<tf.Tensor 'csv_rows:0' shape=(?,) dtype=string>})
return tf.contrib.learn.InputFnOps(
    process_features(features),
    None,
    receive_tensor
)
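
For reference, this is roughly how a serving input function like the one above gets wired into the export step with the tf.contrib.learn API (a rough sketch, not my exact code; the export setup below is a placeholder):

from tensorflow.contrib.learn.python.learn.utils import saved_model_export_utils

#build an export strategy around the serving input function shown above
export_strategy = saved_model_export_utils.make_export_strategy(
    serve_input_fn,
    default_output_alternative_key=None,
    exports_to_keep=1
)
#the strategy is then passed to tf.contrib.learn.Experiment(...,
#export_strategies=[export_strategy]) so training writes the SavedModel (.pb)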

I have deployed the model in Cloud ML Engine. Now I have to do online prediction. To do so I run:

gcloud ml-engine predict --model=<model_name> --version <version> --json-instances=test.json --project <project_name>

When I run the above command, the following error is shown:

{ "error": "Prediction failed: Error during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details=\"NodeDef mentions attr 'select_cols' not in Op output:; attr=OUT_TYPE:list(type),min=1,allowed=[DT_FLOAT, DT_DOUBLE, DT_INT32, DT_INT64, DT_STRING]; attr=field_delim:string,default=\",\"; attr=use_quote_delim:bool,default=true; attr=na_value:string,default=\"\">; NodeDef: DecodeCSV = DecodeCSV[OUT_TYPE=[DT_FLOAT, DT_FLOAT, DT_FLOAT], _output_shapes=[[?,1], [?,1], [?,1]], field_delim=\",\", na_value=\"\", select_cols=[], use_quote_delim=true, _device=\"/job:localhost/replica:0/task:0/device:CPU:0\"](ExpandDims, DecodeCSV/record_defaults_0, DecodeCSV/record_defaults_0, DecodeCSV/record_defaults_0). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).\n\t [[Node: DecodeCSV = DecodeCSV[OUT_TYPE=[DT_FLOAT, DT_FLOAT, DT_FLOAT], _output_shapes=[[?,1], [?,1], [?,1]], field_delim=\",\", na_value=\"\", select_cols=[], use_quote_delim=true, _device=\"/job:localhost/replica:0/task:0/device:CPU:0\"](ExpandDims, DecodeCSV/record_de...TRUNCATED\")" }

I know that tf.contrib.learn.InputFnOps is deprecated, but out of curiosity I want to know whether there is any way to make this prediction work. My test.json data looks like this:

       {"csv_rows":"7.0,8.0,7.0"}
       {"csv_rows":"5.0,6.0,4.0"}

I have trained the model with this data: Train dataset

Your test.json must have exactly one instance per line. In your code you read csv_rows as a string and decode it as CSV, so this is what your code expects in test.json:

{"csv_rows":"7.0,8.0,7.0"}
{"csv_rows":"5.0,6.0,4.0"}
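
If it helps, one quick way to produce that file from plain CSV rows (just a sketch; I'm assuming your rows sit in a local file called test.csv) is:

import json

with open("test.csv") as csv_file, open("test.json", "w") as json_file:
    for row in csv_file:
        row = row.strip()
        if row:
            #one JSON object per line = one prediction instance
            json_file.write(json.dumps({"csv_rows": row}) + "\n")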

If you want to be able to send:

{"renancy":"9.0","freq":"3.0","monetary":"5.0"}
{"renancy":"5.0","freq":"6.0","monetary":"4.0"}

then your serving code has to change to:

def serving_input_fn():
    feature_placeholders = {
        'renancy': tf.placeholder(tf.float32, [None]),
        'freq': tf.placeholder(tf.float32, [None]),
        'monetary': tf.placeholder(tf.float32, [None])
    }
    features = feature_placeholders  # no transformation needed at serving time
    return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
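
Re-export the model with this function and deploy the new version. Assuming your model is a tf.estimator.Estimator (the estimator variable and the export directory below are placeholders), the export looks roughly like:

estimator.export_savedmodel(
    export_dir_base="export",                   #local or GCS directory
    serving_input_receiver_fn=serving_input_fn
)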