Why does TorchScript tracing return different-looking encoded output than the original Transformers model, and how do I fix it?
Background
I am using a fine-tuned MBart50 model and need to speed up inference, because running the HuggingFace model as-is is quite slow on my current hardware. I want to use TorchScript, since I could not get ONNX to export this particular model; support for it seems to be coming later (I would be happy to be wrong about that).
Converting the Transformer to a PyTorch trace:
import torch
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

# Model data
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-one-to-many-mmt", torchscript=True)
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-one-to-many-mmt")
tokenizer.src_lang = 'en_XX'
dummy = "To celebrate World Oceans Day,we're swimming through a shoal of jack fish just off the coast of Baja,California,in Cabo Pulmo National Park. This Mexican marine park in the Sea of Cortez is home to the northernmost and oldest coral reef on the west coast of north America,estimated to be about 20,000 years old. Jacks are clearly plentiful here,but divers and snorkelers in Cabo Pulmo can also come across many other species of fish and marine mammals,including several varieties of sharks,whales,dolphins,tortoises,and manta rays."
model.config.forced_bos_token_id = 250006
myTokenBatch = tokenizer(dummy, max_length=192, padding='max_length', truncation=True, return_tensors="pt")
torch.jit.save(torch.jit.trace(model, [myTokenBatch.input_ids, myTokenBatch.attention_mask]), "././traced-model/mbart-many.pt")
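A traced graph only records the operations seen for the example inputs, so before saving it is worth sanity-checking that the trace reproduces the eager module on those inputs. A minimal sketch with a toy module standing in for the model (the class and shapes here are made up for illustration, not MBart's real signature):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real seq2seq model, kept tiny so it runs anywhere.
class TinyEncoder(nn.Module):
    def __init__(self, vocab=32, dim=8):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.lin = nn.Linear(dim, dim)

    def forward(self, input_ids, attention_mask):
        h = self.lin(self.emb(input_ids))
        # Zero out padded positions using the attention mask.
        return h * attention_mask.unsqueeze(-1)

model = TinyEncoder().eval()
ids = torch.randint(0, 32, (1, 6))
mask = torch.ones(1, 6, dtype=torch.long)

traced = torch.jit.trace(model, (ids, mask))
# The traced graph should reproduce the eager outputs on the traced inputs.
assert torch.allclose(model(ids, mask), traced(ids, mask))
```

With the real model the same pattern applies: call both the eager and the traced module on myTokenBatch and compare the first returned tensor.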
Inference steps:
The expected encoded output is a sequence of token IDs that MBart50TokenizerFast can decode back into words.
import torch
from transformers import MBart50TokenizerFast

# Model data
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-one-to-many-mmt")
model = torch.jit.load('././traced-model/mbart-many.pt')
MAX_LENGTH = 192
model.to(device)
model.eval()
tokenizer.src_lang = 'en_XX'
dummy = "To celebrate World Oceans Day,and manta rays."
# The graph was traced with 192-token padded inputs, so pad here the same way.
myTokenBatch = tokenizer(dummy, max_length=MAX_LENGTH, padding='max_length', truncation=True, return_tensors="pt")
encode, pool, norm = model(myTokenBatch.input_ids.to(device), myTokenBatch.attention_mask.to(device))
print(encode)
Actual output:
I don't know what this is...
tensor([[250004,717,176016,6661,55609,7,10013,4,642,25,107,192298,8305,10,15756,289,111,121477,67155,1660,5773,70,184085,118191,39897,23,143740,21694,432,9907,5227,5,3293,181815,122084,9201,27414,48892,169,83,5368,47,144477,9022,840,18,136,10332,525,184518,456,4240,98,65272,23924,21629,25902,3674,186,1672,6,91578,5369,21763,621,123019,32328,118,7844,3688,1284,41767,120379,2590,1314,831,2843,1380,36880,5941,3789,114149,21968,8080,26719,40368,285,68794,54524,1224,148,50742,13111,19379,1779,43807,125216,332,102,62656,2,1,1]])
Solution
Found the answer here: https://stackoverflow.com/a/66117248/13568346
You cannot convert a seq2seq (encoder-decoder) model directly with this method. To convert a seq2seq model you have to split it up and convert the encoder and the decoder to ONNX separately. You can follow this guide (it was done for T5, which is also a seq2seq model). You need to provide a dummy input for the encoder and for the decoder separately; by default, this conversion method only provides a dummy input for the encoder.