

How to resolve OOM when allocating tensors for self-attention on Colab

I am trying to implement a self-attention GAN with Keras on Google Colab. When I test the attention layer, I get an OOM error. So, am I doing something wrong in the matrix multiplications, or is this simply too expensive an operation for the Colab GPU at high resolutions (> 64 x 64)?

# imports needed to run the snippets below (assuming tf.keras)
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Layer, Conv2D, Lambda, Activation, Reshape


def hw_flatten(x):
   # Input shape x: [BATCH,HEIGHT,WIDTH,CHANNELS]
   # flatten the feature volume across the height and width dimensions

   x = Reshape((x.shape[1]*x.shape[2],x.shape[3]))(x) #in the Reshape layer batch is implicit

   return x # return [BATCH,W*H,CHANNELS]



def matmul(couple_t):
  tensor_1 = couple_t[0]
  tensor_2 = couple_t[1]
  transpose = couple_t[2] #boolean flag: transpose the second operand

  return tf.matmul(tensor_1,tensor_2,transpose_b=transpose)
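
As a quick sanity check on where the huge tensor comes from, here is a small standalone sketch of my own (using plain tf.reshape/tf.matmul in eager mode rather than the layers above, with tiny made-up sizes): flattening an HxW feature map gives [batch, N, channels] with N = H*W, so the score matrix produced by the first matmul is [batch, N, N].

# Standalone shape sketch (assumes TF 2.x eager execution); the sizes are tiny
# stand-ins, the layer below does the same thing with N = 64*64 = 4096.
import tensorflow as tf

batch, height, width, channels = 2, 8, 8, 16
f = tf.random.normal((batch, height, width, channels))     # "query"-like projection
g = tf.random.normal((batch, height, width, channels))     # "key"-like projection

f_flat = tf.reshape(f, (batch, height * width, channels))  # [bs, N, c'], N = h*w
g_flat = tf.reshape(g, (batch, height * width, channels))  # [bs, N, c']

s = tf.matmul(g_flat, f_flat, transpose_b=True)            # [bs, N, N]
print(s.shape)  # (2, 64, 64) here; (32, 4096, 4096) for a 64x64 feature map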



class SelfAttention(Layer):

  def __init__(self,ch,**kwargs):
    super(SelfAttention,self).__init__(**kwargs)
    self.ch = ch

  
  def attentionMap(self,feature_map):

    # 1x1 convolutions producing the "query", "key" and "value" projections
    f = Conv2D(filters=feature_map.shape[3]//8,kernel_size=(1,1),strides=1,padding='same')(feature_map) # [bs,h,w,c']
    g = Conv2D(filters=feature_map.shape[3]//8,kernel_size=(1,1),strides=1,padding='same')(feature_map) # [bs,h,w,c']
    h = Conv2D(filters=feature_map.shape[3],kernel_size=(1,1),strides=1,padding='same')(feature_map)    # [bs,h,w,c]

    s = Lambda(matmul)([hw_flatten(g),hw_flatten(f),True]) # [bs,N,N]
    beta = Activation("softmax")(s)

    o = Lambda(matmul)([beta,hw_flatten(h),False]) # [bs,N,c]


    gamma = self.add_weight(name='gamma',shape=[1],initializer='zeros',trainable=True)

    o = Reshape((feature_map.shape[1:]))(o) # back to [bs,h,w,c]

    x = gamma * o + feature_map

    print(x.shape)

    return x

This is the test:

tensor = np.random.normal(0,1,size=(32,64,64,512)).astype('float64')
attention_o = SelfAttention(64)
a = attention_o.attentionMap(tensor)

And this is the error:

OOM when allocating tensor with shape[32,4096,4096] and type double

Thank you very much for your attention :D

Solution

Your 32x4096x4096 tensor has 536,870,912 entries! Multiplied by the number of bytes in a double (8), that is 4,294,967,296 bytes, i.e. more than 4 GB for that single intermediate tensor; together with the softmax output, the gradients and the rest of the model, it will not fit on the Colab GPU. You may want to add some max-pooling layers to reduce the spatial dimensionality of the data before applying self-attention.
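
For reference, here is the arithmetic in code, together with one possible mitigation (a sketch of my own, assuming a tf.keras MaxPooling2D downsample and float32 data; it is not part of the original answer). Switching the test data from float64 to float32 also halves every buffer.

# Back-of-the-envelope size of the attention score tensor [32, 4096, 4096] in float64,
# plus a mitigation sketch (my own assumption): pool the feature map and use float32.
import tensorflow as tf

entries = 32 * 4096 * 4096               # 536,870,912 entries
print(entries * 8 / 1024**3, "GiB")      # ~4.0 GiB for that single double tensor

feature_map = tf.random.normal((32, 64, 64, 512), dtype=tf.float32)
pooled = tf.keras.layers.MaxPooling2D(pool_size=4)(feature_map)   # [32, 16, 16, 512]
print(pooled.shape)  # N drops from 4096 to 256, so the score tensor shrinks by 256x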
