
PyTorch: how to give more weight to aggregated features (attention-style weighting)

How can I make this PyTorch model give more weight to the aggregated features (an attention-style weighting)?

    def forward(self,nodes_batch):
            """
...
            #Initialize hidden embeddings with the raw node features
            pre_hidden_embs = self.raw_features
            for index in range(1,self.num_layers+1):
                #nb = lower_layer_nodes of 1st order,followed by 2nd order
                nb = nodes_batch_layers[index][0]
                #Extract 3 tuples from 2nd order,followed by 1st
                pre_neighs = nodes_batch_layers[index-1]
                # self.dc.logger.info('aggregate_feats.')
                #Aggregate 1st layer unique nodes,self,2nd layer unique nodes/direct neighs + self
                #Aggregate 2nd layer unique nodes
                aggregate_feats = self.aggregate(nb, pre_hidden_embs, pre_neighs)
                sage_layer = getattr(self,'sage_layer'+str(index))
                if index > 1:
                    #_node_map returns index of lower_layer_nodes_dict --> unique center nodes of layer 0
                    nb = self._nodes_map(nb,pre_neighs)
                    #aggregate_feats = 2*self.aggregate(nb, pre_hidden_embs, pre_neighs)
                # self.dc.logger.info('sage_layer.')
                #W/ nb index,retrieve self + aggregate_feats embeddings (2nd order +1st,then 1st + zero layers)
                cur_hidden_embs = sage_layer(self_feats=pre_hidden_embs[nb],aggregate_feats=aggregate_feats)
                #Agg neigh of 2nd layer to 1st layer
                #Then aggreg 1st layer(aggregated earlier) to zero layer
                #From outside to inside
                pre_hidden_embs = cur_hidden_embs
    
            return pre_hidden_embs

Please note the disabled line below. If it is the first layer being aggregated into the centre nodes, I plan to assign more weight to this aggregate_feats; if it is the second layer being aggregated into the first layer, I want to assign less weight. How can I achieve this? #aggregate_feats = 2*self.aggregate(nb, pre_hidden_embs, pre_neighs)
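One way this could be done, as a minimal sketch with fixed weights: aggregate_feats is just a tensor, so multiplying it by a scalar before it goes into sage_layer is enough, and autograd propagates through the scaling. The helper weight_aggregate and the arguments centre_weight / outer_weight below are hypothetical names, and 2.0 / 0.5 are placeholder values for "more" and "less"; index and num_layers are the same variables used in the loop above.

    import torch

    def weight_aggregate(aggregate_feats, index, num_layers,
                         centre_weight=2.0, outer_weight=0.5):
        # The last loop iteration (index == num_layers) is the one that aggregates
        # the 1st-order neighbours into the centre nodes, so it gets the larger
        # weight; every earlier (outer) hop gets the smaller one.
        weight = centre_weight if index == num_layers else outer_weight
        return weight * aggregate_feats

    # In forward(), right after aggregate_feats = self.aggregate(...):
    #     aggregate_feats = weight_aggregate(aggregate_feats, index, self.num_layers)
    feats = torch.randn(5, 16)
    print(weight_aggregate(feats, index=2, num_layers=2).shape)   # torch.Size([5, 16])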

In other words, how do I assign more weight to a particular embedding, in this case aggregate_feats?
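If the weight should be learned rather than fixed, a small attention-style gate can be placed on aggregate_feats. The sketch below is hypothetical (WeightedSageLayer is not the original SageLayer from the code above): it learns a scalar in (0, 1) that scales the aggregated embedding before it is concatenated with self_feats, so using one such layer per hop lets training decide how much each hop's neighbourhood matters.

    import torch
    import torch.nn as nn

    class WeightedSageLayer(nn.Module):
        """Hypothetical SAGE-style layer that learns how much weight to give
        aggregate_feats relative to self_feats (a simple attention-style gate)."""
        def __init__(self, input_size, out_size):
            super().__init__()
            self.weight = nn.Linear(2 * input_size, out_size)
            # one learnable scalar per layer instance; sigmoid keeps it in (0, 1)
            self.alpha = nn.Parameter(torch.zeros(1))

        def forward(self, self_feats, aggregate_feats):
            gate = torch.sigmoid(self.alpha)          # learned importance of the neighbourhood
            combined = torch.cat([self_feats, gate * aggregate_feats], dim=1)
            return torch.relu(self.weight(combined))

    # quick check with random tensors
    layer = WeightedSageLayer(input_size=16, out_size=8)
    out = layer(torch.randn(4, 16), torch.randn(4, 16))
    print(out.shape)   # torch.Size([4, 8])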
