
How to fix "Create Version Failed. Bad model detected with error" when creating a custom model version on Google Cloud AI Platform

I am trying to deploy a custom model on AI Platform. I followed the steps described in the Google documentation: https://cloud.google.com/ai-platform/prediction/docs/deploying-models#global-endpoint

The SavedModel is stored in Google Cloud Storage and was trained with Python 3.7.
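For reference, the model directory can be inspected with gsutil; the listing in the comments below is only what I would expect it to contain for a TensorFlow SavedModel plus the uploaded source distribution, not actual output:

gsutil ls gs://ai_platform_custom/SavedModel
# expected contents (assumed, for illustration):
#   gs://ai_platform_custom/SavedModel/saved_model.pb
#   gs://ai_platform_custom/SavedModel/variables/
#   gs://ai_platform_custom/SavedModel/my_custom_code-0.1.tar.gz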

These are the gcloud commands used for the deployment:

gcloud ai-platform models create title_topic_custom \
  --regions=europe-west1 --enable_logging
MODEL_DIR="gs://ai_platform_custom/SavedModel"
VERSION_NAME="V3"
MODEL_NAME="title_topic_custom"
CUSTOM_CODE_PATH="gs://ai_platform_custom/SavedModel/my_custom_code-0.1.tar.gz"
PREDICTOR_CLASS="predictor.py.MyPredictor"
gcloud beta ai-platform versions create $VERSION_NAME \
  --model=$MODEL_NAME \
  --origin=$MODEL_DIR \
  --runtime-version=2.1 \
  --python-version=3.7 \
  --machine-type=mls1-c1-m2 \
  --package-uris=$CUSTOM_CODE_PATH \
  --prediction-class=$PREDICTOR_CLASS

Executing these commands produces the following error:

Using endpoint [https://ml.googleapis.com/]
Creating version (this might take a few minutes)......Failed.                                                                                                                          
ERROR: (gcloud.beta.ai-platform.versions.create) Create Version Failed. Bad model detected with error:  "There was a problem processing the user code: predictor.py.MyPredictor cannot be found. Please make sure (1) prediction_class is the fully qualified function name,and (2) it uses the correct package name as provided by the package_uris: ['gs://ai_platform_custom/SavedModel/my_custom_code-0.1.tar.gz'] (Error code: 4)"
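Reading the error message, it looks like --prediction-class expects a fully qualified <module>.<ClassName> rather than a file name, so presumably something like the value below (just my reading of the message, not something I have confirmed):

PREDICTOR_CLASS="predictor.MyPredictor"   # module name without the .py suffix (assumed)

and then re-running the same gcloud beta ai-platform versions create command as above.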

The predictor code is as follows:

%%writefile predictor.py
import os
import spacy
import numpy as np
import joblib
import tensorflow as tf
import nltk

nltk.download('stopwords')
from nltk.corpus import stopwords


class MyPredictor(object):
  def __init__(self, model, topic_encoder):
    self._model = model
    self._nlp = spacy.load('en_core_web_sm')
    self._stopwords = stopwords.words('english')
    self._topic_encoder = topic_encoder

  def predict(self, instances, **kwargs):
    # Strip English stopwords and lemmatize with spaCy before calling the model.
    inputs = np.asarray(instances)
    inputs_t = [' '.join([i for i in x.split() if i not in self._stopwords]) for x in inputs]
    preprocessed_inputs = [' '.join([i.lemma_ for i in self._nlp(x)]) for x in inputs_t]
    outputs = self._model.predict(preprocessed_inputs)
    # Map each argmax class index back to its topic label.
    return [self._topic_encoder[key] for key in np.argmax(outputs, axis=1)]

  @classmethod
  def from_path(cls, model_dir):
    # AI Platform calls this with the directory the model artifacts were copied to.
    model_path = os.path.join(model_dir)
    model = tf.keras.models.load_model(model_path)
    topic_encoder = {0: 'topic1', 1: 'topic2', 3: 'topic3'}
    return cls(model, topic_encoder)

This is the setup file:

from setuptools import setup

setup(
    name='my-custom-code',
    version='0.1',
    install_requires=['nltk', 'spacy', 'joblib'],
    scripts=['predictor.py'],
)
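For completeness, the source distribution referenced by CUSTOM_CODE_PATH was built and uploaded roughly like this (reconstructed from the usual setuptools sdist + gsutil workflow, so the exact archive name may differ depending on the setuptools version):

python setup.py sdist --formats=gztar
# with some setuptools versions the archive is named my-custom-code-0.1.tar.gz instead
gsutil cp dist/my_custom_code-0.1.tar.gz gs://ai_platform_custom/SavedModel/my_custom_code-0.1.tar.gz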

Is there any way to fix this?
