How to resolve "Model requires more memory than allowed" when deploying a small transformer model for prediction serving on Google Cloud AI Platform
I have a fine-tuned distilgpt2 model that I want to deploy with GCP AI Platform.
I followed all the documentation for deploying a custom prediction routine on GCP, but creating the model version fails with this error:
Create Version failed. Bad model detected with error: Model requires more memory than allowed. Please try to decrease the model size and re-deploy.
Here is my setup.py file:
from setuptools import setup

setup(
    name="generator_package",
    version="0.2",
    include_package_data=True,
    scripts=["generator_class.py"],
    install_requires=['transformers==2.8.0'],
)
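For scale: even the smallest GPT-2 variant is not tiny once loaded. A rough back-of-the-envelope check, assuming float32 weights and the commonly cited ~82M parameter count for distilgpt2:

```python
# Rough estimate of distilgpt2's in-memory weight size.
# Assumes float32 (4 bytes/param) and the commonly cited ~82M parameters.
PARAMS = 82_000_000          # approximate parameter count for distilgpt2
BYTES_PER_PARAM = 4          # float32
weight_mb = PARAMS * BYTES_PER_PARAM / (1024 ** 2)
print(f"approx. weight memory: {weight_mb:.0f} MB")  # → approx. weight memory: 313 MB
```

That is roughly 313 MB for the weights alone, before PyTorch, transformers, and the serving runtime add their own overhead.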
I then create the model version with:
gcloud beta ai-platform versions create v1 --model my_model \
--origin=gs://my_bucket/model/ \
--python-version=3.7 \
--runtime-version=2.3 \
--package-uris=gs://my_bucket/packages/gpt2-0.1.tar.gz,gs://cloud-ai-pytorch/torch-1.3.1+cpu-cp37-cp37m-linux_x86_64.whl \
--prediction-class=model_prediction.CustomModelPrediction
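For context, --prediction-class points at a class implementing AI Platform's custom prediction routine interface: a predict method plus a from_path classmethod. Below is a minimal sketch of such a class for a GPT-2 model, assuming the transformers 2.8 API (GPT2LMHeadModel, GPT2Tokenizer); the heavy imports are deferred until they are actually needed:

```python
# Minimal sketch of an AI Platform custom prediction routine for a GPT-2
# model. The transformers/torch imports are deferred so the module itself
# can be loaded before the heavy dependencies are touched.
class CustomModelPrediction(object):
    def __init__(self, model, tokenizer):
        self._model = model
        self._tokenizer = tokenizer

    def predict(self, instances, **kwargs):
        # instances: list of prompt strings sent to the prediction service
        import torch
        results = []
        for prompt in instances:
            input_ids = self._tokenizer.encode(prompt, return_tensors="pt")
            with torch.no_grad():
                output = self._model.generate(input_ids, max_length=50)
            results.append(self._tokenizer.decode(output[0]))
        return results

    @classmethod
    def from_path(cls, model_dir):
        # model_dir points at the artifacts copied from --origin
        from transformers import GPT2LMHeadModel, GPT2Tokenizer
        model = GPT2LMHeadModel.from_pretrained(model_dir)
        model.eval()
        tokenizer = GPT2Tokenizer.from_pretrained(model_dir)
        return cls(model, tokenizer)
```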
Following this answer: PyTorch model deployment in AI Platform, I figured out how to install PyTorch in my custom prediction routine, but I still get the error above. I suspect it may be related to the transformers package, since it has torch as a dependency. Could that be causing the problem?
I have tried every suggested route but cannot get this to work; I still get the error above. I am using the smallest GPT-2 model, and it fits entirely in memory.
Can anyone who has successfully deployed to GCP offer some insight here?
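Before uploading, it can help to total the size of everything AI Platform has to pull in: the model directory from --origin plus every --package-uris artifact. A stdlib-only sketch (the local staging paths are illustrative):

```python
import os

def total_size_mb(path):
    """Sum the on-disk size of a file or directory tree, in MB."""
    if os.path.isfile(path):
        return os.path.getsize(path) / (1024 ** 2)
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / (1024 ** 2)

# Example: local staging copies of what gets uploaded to the bucket
# (paths are illustrative)
for artifact in ["model/", "dist/gpt2-0.1.tar.gz"]:
    if os.path.exists(artifact):
        print(f"{artifact}: {total_size_mb(artifact):.1f} MB")
```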
Update:
To address the possibility raised above that transformers pulling in torch is what causes the problem, I rebuilt the .whl file from source with the extra packages removed; the edited setup.py file is below, built with python setup.py bdist_wheel.
I then added this whl to the required dependencies when creating the model version in GCP, and removed transformers==2.8.0 from my own setup.py. But it still gives the same error: model requires more memory =(
import shutil
from pathlib import Path
from setuptools import find_packages, setup
# Remove stale transformers.egg-info directory to avoid https://github.com/pypa/pip/issues/5466
stale_egg_info = Path(__file__).parent / "transformers.egg-info"
if stale_egg_info.exists():
    print(
        (
            "Warning: {} exists.\n\n"
            "If you recently updated transformers to 3.0 or later, this is expected,\n"
            "but it may prevent transformers from installing in editable mode.\n\n"
            "This directory is automatically generated by Python's packaging tools.\n"
            "I will remove it now.\n\n"
            "See https://github.com/pypa/pip/issues/5466 for details.\n"
        ).format(stale_egg_info)
    )
    shutil.rmtree(stale_egg_info)
extras = {}
extras["mecab"] = ["mecab-python3"]
extras["sklearn"] = ["scikit-learn"]
# extras["tf"] = ["tensorflow"]
# extras["tf-cpu"] = ["tensorflow-cpu"]
# extras["torch"] = ["torch"]
extras["serving"] = ["pydantic", "uvicorn", "fastapi", "starlette"]
extras["all"] = extras["serving"] + ["tensorflow", "torch"]
extras["testing"] = ["pytest", "pytest-xdist"]
extras["docs"] = ["recommonmark", "sphinx", "sphinx-markdown-tables", "sphinx-rtd-theme"]
extras["quality"] = ["black", "isort", "flake8"]
extras["dev"] = extras["testing"] + extras["quality"] + ["mecab-python3", "scikit-learn", "tensorflow", "torch"]
setup(
    name="transformers",
    version="2.8.0",
    author="Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Sam Shleifer, Google AI Language Team Authors, Open AI team Authors, Facebook AI Authors, Carnegie Mellon University Authors",
    author_email="thomas@huggingface.co",
    description="State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch",
    long_description=open("README.md", "r", encoding="utf-8").read(),
    long_description_content_type="text/markdown",
    keywords="NLP deep learning transformer pytorch tensorflow BERT GPT GPT-2 google openai CMU",
    license="Apache",
    url="https://github.com/huggingface/transformers",
    package_dir={"": "src"},
    packages=find_packages("src"),
    install_requires=[
        "numpy",
        "tokenizers == 0.5.2",
        # dataclasses for Python versions that don't have it
        "dataclasses;python_version<'3.7'",
        # accessing files from S3 directly
        "boto3",
        # filesystem locks e.g. to prevent parallel downloads
        "filelock",
        # for downloading models over HTTPS
        "requests",
        # progress bars in model download and training scripts
        "tqdm >= 4.27",
        # for OpenAI GPT
        "regex != 2019.12.17",
        # for XLNet
        "sentencepiece",
        # for XLM
        "sacremoses",
    ],
    extras_require=extras,
    scripts=["transformers-cli"],
    python_requires=">=3.6.0",
    classifiers=[
        "Development Status :: 5 - Production/Stable",
        "Intended Audience :: Developers",
        "Intended Audience :: Education",
        "Intended Audience :: Science/Research",
        "License :: OSI Approved :: Apache Software License",
        "Operating System :: OS Independent",
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3.6",
        "Programming Language :: Python :: 3.7",
        "Topic :: Scientific/Engineering :: Artificial Intelligence",
    ],
)
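One way to verify the rebuilt wheel really no longer declares torch: a wheel is just a zip archive, so its declared dependencies can be read straight from the embedded METADATA file. A stdlib-only sketch (the wheel path in the comment is illustrative):

```python
import zipfile

def wheel_requirements(wheel_path):
    """Return the Requires-Dist lines from a wheel's METADATA file."""
    reqs = []
    with zipfile.ZipFile(wheel_path) as wheel:
        for name in wheel.namelist():
            if name.endswith(".dist-info/METADATA"):
                metadata = wheel.read(name).decode("utf-8")
                reqs = [line for line in metadata.splitlines()
                        if line.startswith("Requires-Dist:")]
                break
    return reqs

# Example (path is illustrative):
# for req in wheel_requirements("dist/transformers-2.8.0-py3-none-any.whl"):
#     print(req)
# A stray "Requires-Dist: torch" here means the rebuild didn't take.
```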