How to resolve a Google AI Platform prediction error with an Object Detection API model: HttpError 400, inconsistent batch size for a tensor name
I need to run remote online prediction with a TensorFlow Object Detection API model, and I am trying to use Google AI Platform. When I request an online prediction from an object detection model on AI Platform, I get an error like:
HttpError 400 Tensor name: num_proposals has inconsistent batch size: 1 expecting: 49152
When I run the prediction locally (e.g. result = model(image)), I get the expected results.
The error occurs with several Object Detection models (Mask-RCNN and MobileNet). It happens both with object detection models I trained myself and with models loaded directly from the Object Detection Model Zoo (v2). Using the same code, I get successful results with non-object-detection models deployed on AI Platform.
Signature information
The model's input signature-def appears to be correct:
!saved_model_cli show --dir {MODEL_DIR_GS}
!saved_model_cli show --dir {MODEL_DIR_GS} --tag_set serve
!saved_model_cli show --dir {MODEL_DIR_GS} --tag_set serve --signature_def serving_default
gives:
The given SavedModel contains the following tag-sets:
serve
The given SavedModel MetaGraphDef contains SignatureDefs with the following keys:
SignatureDef key: "__saved_model_init_op"
SignatureDef key: "serving_default"
The given SavedModel SignatureDef contains the following input(s):
inputs['input_tensor'] tensor_info:
dtype: DT_UINT8
shape: (1,-1,3)
name: serving_default_input_tensor:0
The given SavedModel SignatureDef contains the following output(s):
outputs['anchors'] tensor_info:
dtype: DT_FLOAT
shape: (-1,4)
name: StatefulPartitionedCall:0
outputs['box_classifier_features'] tensor_info:
dtype: DT_FLOAT
shape: (300,9,1536)
name: StatefulPartitionedCall:1
outputs['class_predictions_with_background'] tensor_info:
dtype: DT_FLOAT
shape: (300,2)
name: StatefulPartitionedCall:2
outputs['detection_anchor_indices'] tensor_info:
dtype: DT_FLOAT
shape: (1,100)
name: StatefulPartitionedCall:3
outputs['detection_boxes'] tensor_info:
dtype: DT_FLOAT
shape: (1,100,4)
name: StatefulPartitionedCall:4
outputs['detection_classes'] tensor_info:
dtype: DT_FLOAT
shape: (1,100)
name: StatefulPartitionedCall:5
outputs['detection_masks'] tensor_info:
dtype: DT_FLOAT
shape: (1,33,33)
name: StatefulPartitionedCall:6
outputs['detection_multiclass_scores'] tensor_info:
dtype: DT_FLOAT
shape: (1,2)
name: StatefulPartitionedCall:7
outputs['detection_scores'] tensor_info:
dtype: DT_FLOAT
shape: (1,100)
name: StatefulPartitionedCall:8
outputs['final_anchors'] tensor_info:
dtype: DT_FLOAT
shape: (1,300,4)
name: StatefulPartitionedCall:9
outputs['image_shape'] tensor_info:
dtype: DT_FLOAT
shape: (4)
name: StatefulPartitionedCall:10
outputs['mask_predictions'] tensor_info:
dtype: DT_FLOAT
shape: (100,1,33)
name: StatefulPartitionedCall:11
outputs['num_detections'] tensor_info:
dtype: DT_FLOAT
shape: (1)
name: StatefulPartitionedCall:12
outputs['num_proposals'] tensor_info:
dtype: DT_FLOAT
shape: (1)
name: StatefulPartitionedCall:13
outputs['proposal_boxes'] tensor_info:
dtype: DT_FLOAT
shape: (1,4)
name: StatefulPartitionedCall:14
outputs['proposal_boxes_normalized'] tensor_info:
dtype: DT_FLOAT
shape: (1,4)
name: StatefulPartitionedCall:15
outputs['raw_detection_boxes'] tensor_info:
dtype: DT_FLOAT
shape: (1,4)
name: StatefulPartitionedCall:16
outputs['raw_detection_scores'] tensor_info:
dtype: DT_FLOAT
shape: (1,2)
name: StatefulPartitionedCall:17
outputs['refined_box_encodings'] tensor_info:
dtype: DT_FLOAT
shape: (300,4)
name: StatefulPartitionedCall:18
outputs['rpn_box_encodings'] tensor_info:
dtype: DT_FLOAT
shape: (1,12288,4)
name: StatefulPartitionedCall:19
outputs['rpn_objectness_predictions_with_background'] tensor_info:
dtype: DT_FLOAT
shape: (1,2)
name: StatefulPartitionedCall:20
Method name is: tensorflow/serving/predict
Steps to reproduce
- Download the model from the TensorFlow Model Zoo.
- Deploy it to AI Platform:
!gcloud config set project $PROJECT
!gcloud beta ai-platform models create $MODEL --regions=us-central1
%%bash -s $PROJECT $MODEL $VERSION $MODEL_DIR_GS
gcloud ai-platform versions create $3 \
--project $1 \
--model $2 \
--origin $4 \
--runtime-version=2.1 \
--framework=tensorflow \
--python-version=3.7 \
--machine-type=n1-standard-2 \
--accelerator type=nvidia-tesla-t4
- Run remote evaluation:
import socket

import googleapiclient.discovery  # import the discovery submodule explicitly
import numpy as np

img_np = np.zeros((100, 3), dtype=np.uint8)
img_list = img_np.tolist()  # numpy's method is tolist(), not to_list()
instances = [img_list]
socket.setdefaulttimeout(600)  # set timeout to 10 minutes
service = googleapiclient.discovery.build('ml', 'v1', cache_discovery=False)
model_version_string = 'projects/{}/models/{}/versions/{}'.format(PROJECT, MODEL, VERSION)
print(model_version_string)
response = service.projects().predict(
    name=model_version_string, body={'instances': instances}
).execute()
if 'error' in response:
    raise RuntimeError(response['error'])
else:
    print(f'Success. # keys={response.keys()}')
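Before sending a request, it can help to sanity-check the nested-list shape of the payload against the serving signature. A minimal stdlib-only sketch (nested_shape is a hypothetical helper, not part of any API):

```python
import json

def nested_shape(x):
    """Return the shape of a uniformly nested list, like numpy's .shape."""
    shape = []
    while isinstance(x, list):
        shape.append(len(x))
        x = x[0] if x else None
    return tuple(shape)

# Hypothetical stand-in for img_np.tolist(): 100 rows of 3 channels of zeros.
img_list = [[0, 0, 0] for _ in range(100)]
instances = [img_list]

# The outer list supplies the batch dimension; the signature above
# expects (1, -1, 3) for input_tensor.
print(nested_shape(instances))  # (1, 100, 3)
body = json.dumps({"instances": instances})
```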
I get an error like:
HttpError: <HttpError 400 when requesting
https://ml.googleapis.com/v1/projects/gcp_project/models/error_demo/versions/mobilenet:predict?alt=json
returned "{ "error": "Tensor name: refined_box_encodings has inconsistent batch size: 300
expecting: 1"}}>
Other information
- If I change the instances variable in the request body from instances = [img_list] to instances = [{'input_tensor': img_list}], the code still fails.
- If I deliberately use an incorrect input shape (e.g. (1,2) or (100,2)), I get a response saying the input shape is invalid:
invalidArgument -- The value for one of fields in the request body was invalid.
- If I repeat the prediction step, I get the same error message but with a different tensor name.
- If I run the request through gcloud:
import json

x = {"instances": [
    [
        [
            [0, 0], [0, 0]
        ], [
            [0, 0]
        ]
    ]
]}
with open('test.json', 'w') as f:
    json.dump(x, f)
!gcloud ai-platform predict --model $MODEL --json-request=./test.json
I get an INVALID_ARGUMENT error:
ERROR: (gcloud.ai-platform.predict) HTTP request failed. Response: {
"error": {
"code": 400,"message": "{ \"error\": \"Tensor name: anchors has inconsistent batch size: 49152 expecting: 1\" }","status": "INVALID_ARGUMENT"
}
}
- If I use the Google Cloud Console (the Test & Use tab of the AI Platform Version Details screen), I get the same error.
I enabled logging (both standard and console logging), but it provides no additional information.
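For comparison, a well-formed test.json can be generated with the standard library alone. This is a hypothetical payload with one instance shaped (2, 3), matching the per-instance layout used in the Python example earlier:

```python
import json

# Hypothetical well-formed payload: one zero "image" shaped (2, 3),
# consistent with the uint8 input_tensor used above.
img_list = [[0, 0, 0], [0, 0, 0]]

with open('test.json', 'w') as f:
    json.dump({"instances": [img_list]}, f)

# Then: gcloud ai-platform predict --model $MODEL --json-request=./test.json
```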
The request details above follow the format described in the AI Platform Prediction JSON documentation.
Thanks in advance. I have spent a full day on this and am truly stuck!
Workaround
According to https://github.com/tensorflow/serving/issues/1047, when a request uses the instances key, TensorFlow Serving checks that every component of the output has the same batch size. The workaround is to use the inputs key instead.
For example:
inputs = [img_list]
...
response = service.projects().predict(
    name=model_version_string, body={'inputs': inputs}
).execute()
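The difference between the two request bodies can be sketched with the standard library alone (img_list is a hypothetical nested-list image, as before):

```python
import json

img_list = [[0, 0, 0], [0, 0, 0]]  # hypothetical (2, 3) zero "image"

# Row format: TF Serving checks every output tensor against the batch
# size implied by the instance list, which triggers the 400 error here.
row_body = {"instances": [img_list]}

# Column format: the output batch-size consistency check is skipped,
# so the same payload succeeds.
col_body = {"inputs": [img_list]}

print(json.dumps(row_body))
print(json.dumps(col_body))
```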