How to jointly train two embedding models (KGE + GloVe)
How can I create, in PyTorch, a joint model that shares parameters between a knowledge-graph embedding (KGE) model, TuckER (shown below), and GloVe (assume a co-occurrence matrix and the embedding dimensions are already available)?
In other words, the joint model must follow the CMTF (Coupled Matrix and Tensor Factorizations) framework: the weights of the two embeddings must be tied during training. The difficulty is that the KGE model consumes triples (subject, relation, object), while GloVe consumes a co-occurrence matrix; moreover, their loss functions are computed differently.
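For reference, the GloVe side of the coupling can be written as a weighted least-squares objective over the co-occurrence matrix. This is a minimal sketch of that loss (the function name and signature are my own, not from any repository):

```python
import torch

def glove_loss(emb, emb_ctx, b, b_ctx, X, x_max=100.0, alpha=0.75):
    """Weighted least-squares GloVe objective over a dense
    co-occurrence matrix X. `emb` / `emb_ctx` are the word and
    context embedding matrices, `b` / `b_ctx` the bias vectors."""
    # f(X_ij): the clipped weighting function from the GloVe paper
    weights = torch.clamp(X / x_max, max=1.0) ** alpha
    # predicted log co-occurrence: w_i . w~_j + b_i + b~_j
    logits = emb @ emb_ctx.t() + b.unsqueeze(1) + b_ctx.unsqueeze(0)
    # only pairs that actually co-occur contribute (log 0 is undefined)
    mask = X > 0
    log_X = torch.log(X.clamp(min=1e-12))
    return (weights * (logits - log_X) ** 2)[mask].sum()
```

Because this loss is an ordinary differentiable function of an embedding matrix, it can share that matrix with any other PyTorch module.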
import numpy as np
import torch
from torch.nn.init import xavier_normal_

class TuckER(torch.nn.Module):
    def __init__(self, d, d1, d2, **kwargs):
        super(TuckER, self).__init__()
        self.E = torch.nn.Embedding(len(d.entities), d1)
        self.R = torch.nn.Embedding(len(d.relations), d2)
        # core tensor of the Tucker decomposition, shape (d2, d1, d1)
        self.W = torch.nn.Parameter(torch.tensor(np.random.uniform(-1, 1, (d2, d1, d1)), dtype=torch.float, device="cuda", requires_grad=True))
        self.input_dropout = torch.nn.Dropout(kwargs["input_dropout"])
        self.hidden_dropout1 = torch.nn.Dropout(kwargs["hidden_dropout1"])
        self.hidden_dropout2 = torch.nn.Dropout(kwargs["hidden_dropout2"])
        self.loss = torch.nn.BCELoss()
        self.bn0 = torch.nn.BatchNorm1d(d1)
        self.bn1 = torch.nn.BatchNorm1d(d1)

    def init(self):
        xavier_normal_(self.E.weight.data)
        xavier_normal_(self.R.weight.data)

    def forward(self, e1_idx, r_idx):
        e1 = self.E(e1_idx)
        x = self.bn0(e1)
        x = self.input_dropout(x)
        x = x.view(-1, 1, e1.size(1))
        r = self.R(r_idx)
        W_mat = torch.mm(r, self.W.view(r.size(1), -1))
        W_mat = W_mat.view(-1, e1.size(1), e1.size(1))
        W_mat = self.hidden_dropout1(W_mat)
        x = torch.bmm(x, W_mat)
        x = x.view(-1, e1.size(1))
        x = self.bn1(x)
        x = self.hidden_dropout2(x)
        x = torch.mm(x, self.E.weight.transpose(1, 0))
        pred = torch.sigmoid(x)
        return pred
I know how to jointly train two pretrained models by loading their state dicts, instantiating both, running inputs through each, and applying a feed-forward layer on top. But I can't work out how to handle this case. Can you suggest how I might achieve this?
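To make the question concrete, here is one shape a CMTF-style coupling could take (a sketch under my own assumptions, not working code from the TuckER repo): a single nn.Module holds the entity table E, the TuckER scorer and a GloVe head both read from it, and training backpropagates a weighted sum of the two losses. The alignment between entity ids and word ids (here assumed to be identical vocabularies) is a simplifying assumption.

```python
import torch

class JointKGEGlove(torch.nn.Module):
    """Sketch of parameter coupling: self.E is shared between the
    TuckER-style scorer and a GloVe factorization head. Entity and
    word vocabularies are assumed to coincide."""
    def __init__(self, n_entities, n_relations, dim):
        super().__init__()
        self.E = torch.nn.Embedding(n_entities, dim)    # shared table
        self.R = torch.nn.Embedding(n_relations, dim)
        self.W = torch.nn.Parameter(torch.randn(dim, dim, dim) * 0.1)
        self.ctx = torch.nn.Embedding(n_entities, dim)  # GloVe context side
        self.b = torch.nn.Parameter(torch.zeros(n_entities))
        self.b_ctx = torch.nn.Parameter(torch.zeros(n_entities))
        self.bce = torch.nn.BCELoss()

    def kge_scores(self, e1_idx, r_idx):
        # simplified TuckER scoring (no batch norm / dropout)
        e1, r = self.E(e1_idx), self.R(r_idx)
        W_mat = torch.mm(r, self.W.view(r.size(1), -1))
        W_mat = W_mat.view(-1, e1.size(1), e1.size(1))
        x = torch.bmm(e1.unsqueeze(1), W_mat).squeeze(1)
        return torch.sigmoid(x @ self.E.weight.t())

    def glove_loss(self, X, x_max=100.0, alpha=0.75):
        # GloVe weighted least squares over co-occurrence matrix X,
        # reading word vectors directly from the shared table self.E
        w = torch.clamp(X / x_max, max=1.0) ** alpha
        logits = (self.E.weight @ self.ctx.weight.t()
                  + self.b.unsqueeze(1) + self.b_ctx.unsqueeze(0))
        mask = X > 0
        return (w * (logits - torch.log(X.clamp(min=1e-12))) ** 2)[mask].sum()

    def forward(self, e1_idx, r_idx, targets, X, lam=0.5):
        # one combined objective; both terms update self.E
        return self.bce(self.kge_scores(e1_idx, r_idx), targets) + lam * self.glove_loss(X)
```

A single optimizer over `model.parameters()` then updates E from both gradients, which is exactly the weight tying CMTF asks for; `lam` trades off the two objectives.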
Relevant resources:
- TuckER 代码 - https://github.com/ibalazevic/TuckER