Tensorflow object detection serving
I am using the TensorFlow Object Detection API. The problem with this API is that it exports a frozen graph for inference, and I can't use that graph for serving. So, as a workaround, I followed the tutorial here. But when I try to export the graph, I get the following error:
InvalidArgumentError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
Assign requires shapes of both tensors to match. lhs shape= [1024,4] rhs shape= [1024,8]
[[node save/Assign_258 (defined at /home/deploy/models/research/object_detection/exporter.py:67) = Assign[T=DT_FLOAT, _class=["loc:@SecondStageBoxPredictor/BoxEncodingPredictor/weights"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](SecondStageBoxPredictor/BoxEncodingPredictor/weights, save/RestoreV2/_517)]] [[{{node save/RestoreV2/_522}} = _Send[T=DT_FLOAT, client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device_incarnation=1, tensor_name="edge_527_save/RestoreV2", _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
The error indicates a mismatch in the graph. One possible reason could be that the pre-trained graph I am training from has 4 classes while my model has 8 classes (hence the shape mismatch). The deeplab model had a similar problem, and their solution was to start training with the --initialize_last_layer=False and --last_layers_contain_logits_only=False flags. But the TensorFlow Object Detection API has no such flags. So, how should I proceed? Also, is there any other way to serve the TensorFlow Object Detection API?
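A quick way to check which shape actually comes from the checkpoint is to list the variables it stores. Below is a minimal sketch; the checkpoint path and step number are assumptions, not taken from the question:

# Minimal sketch: list the variable shapes stored in the training checkpoint.
# The checkpoint path / step number is an assumption for illustration.
import tensorflow as tf

ckpt = ("/home/deploy/models/research/object_detection/"
        "Training_carrot_060219/model.ckpt-60000")

for name, shape in tf.train.list_variables(ckpt):
    if "SecondStageBoxPredictor/BoxEncodingPredictor/weights" in name:
        # For a non-class-agnostic Faster R-CNN box predictor the second
        # dimension is usually num_classes * 4 box coordinates, so [1024, 8]
        # vs. [1024, 4] points to a num_classes mismatch between the
        # checkpoint and the config used for export.
        print(name, shape)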
My config file looks like this:
model {
  faster_rcnn {
    num_classes: 1
    image_resizer {
      fixed_shape_resizer {
        height: 1000
        width: 1000
        resize_method: AREA
      }
    }
    feature_extractor {
      type: "faster_rcnn_inception_v2"
      first_stage_features_stride: 16
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        height_stride: 16
        width_stride: 16
        scales: 0.25
        scales: 0.5
        scales: 1.0
        scales: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 1.0
        aspect_ratios: 2.0
      }
    }
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.0
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.00999999977648
        }
      }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.699999988079
    first_stage_max_proposals: 300
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    initial_crop_size: 14
    maxpool_kernel_size: 2
    maxpool_stride: 2
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        fc_hyperparams {
          op: FC
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
        use_dropout: false
        dropout_keep_probability: 1.0
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: 0.0
        iou_threshold: 0.600000023842
        max_detections_per_class: 100
        max_total_detections: 300
      }
      score_converter: SOFTMAX
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
  }
}
train_config {
  batch_size: 8
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  optimizer {
    adam_optimizer {
      learning_rate {
        manual_step_learning_rate {
          initial_learning_rate: 0.00010000000475
          schedule {
            step: 40000
            learning_rate: 3.00000010611e-05
          }
        }
      }
    }
    use_moving_average: true
  }
  gradient_clipping_by_norm: 10.0
  fine_tune_checkpoint: "/home/deploy/models/research/object_detection/faster_rcnn_inception_v2_coco_2018_01_28/model.ckpt"
  from_detection_checkpoint: true
  num_steps: 60000
  max_number_of_boxes: 100
}
train_input_reader {
  label_map_path: "/home/deploy/models/research/object_detection/Training_carrot_060219/carrot_identify.pbtxt"
  tf_record_input_reader {
    input_path: "/home/deploy/models/research/object_detection/Training_carrot_060219/train.record"
  }
}
eval_config {
  num_visualizations: 100
  num_examples: 135
  eval_interval_secs: 60
  use_moving_averages: false
}
eval_input_reader {
  label_map_path: "/home/deploy/models/research/object_detection/Training_carrot_060219/carrot_identify.pbtxt"
  shuffle: true
  num_epochs: 1
  num_readers: 1
  tf_record_input_reader {
    input_path: "/home/deploy/models/research/object_detection/Training_carrot_060219/test.record"
  }
  sample_1_of_n_examples: 1
}
When exporting a model for TF Serving, the config file and the checkpoint files should correspond to each other.
The problem is that, when exporting your custom-trained model, you are using the old config file with the new checkpoint files.
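A minimal sketch of the fix, assuming a TF1-style Object Detection API setup (the paths and checkpoint step number below are assumptions): re-export using the pipeline config that was actually used for training, so that the graph rebuilt by exporter.py matches the variables in the checkpoint. This mirrors what object_detection/export_inference_graph.py does:

# Minimal export sketch; paths and the checkpoint step (60000) are assumptions.
import tensorflow as tf
from google.protobuf import text_format

from object_detection import exporter
from object_detection.protos import pipeline_pb2

TRAIN_DIR = "/home/deploy/models/research/object_detection/Training_carrot_060219"
pipeline_config_path = TRAIN_DIR + "/pipeline.config"        # config used for training
trained_checkpoint_prefix = TRAIN_DIR + "/model.ckpt-60000"  # checkpoint from that run
output_directory = TRAIN_DIR + "/exported"

# Parse the training pipeline config so the exported graph is built with the
# same num_classes, feature extractor, etc. that the checkpoint expects.
pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
with tf.gfile.GFile(pipeline_config_path, "r") as f:
    text_format.Merge(f.read(), pipeline_config)

# Rebuilds the detection graph from the config, restores the checkpoint and
# writes frozen_inference_graph.pb plus a saved_model/ directory.
exporter.export_inference_graph(
    input_type="image_tensor",
    pipeline_config=pipeline_config,
    trained_checkpoint_prefix=trained_checkpoint_prefix,
    output_directory=output_directory)

If training was run with model_main.py, a copy of the pipeline config is typically written into the training directory; that is the one to pass here. The saved_model/ directory written under output_directory is what TensorFlow Serving can load directly, so no separate conversion of the frozen graph is needed.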