
Transferring python/numpy indexing to TensorFlow and improving performance


In a previous question here I asked for advice on assigning items to an array faster. I have made some progress since then; for example, I extended the recommended version to handle 3-D arrays, where the extra dimension is meant to resemble the batch size of neural-network training data later on:

import numpy as np
import time

batch_dim = 2
first_dim = 5
second_dim = 7
depth_dim = 10

upper_count = 5000

# toy lookup table: one random vector of length depth_dim per id
toy_dict = {k:np.random.random_sample(size = depth_dim) for k in range(upper_count)}
a = np.array(list(toy_dict.values()))

def create_input_3d(orig_arr):
  print("Input shape:",orig_arr.shape)
  goal_arr = np.full(shape=(batch_dim,orig_arr.shape[1],orig_arr.shape[2],depth_dim),fill_value=1234,dtype=float)

  print("Goal shape:",goal_arr.shape)

  # index grids for all three axes of orig_arr, shape (3, batch, first, second)
  idx = np.indices(orig_arr.shape)
  print("Idx shape",idx.shape)
  # per-position lookup: goal_arr[b,i,j,:] = a[orig_arr[b,i,j]]
  goal_arr[idx[0],idx[1],idx[2]] = a[orig_arr[idx[0],idx[1],idx[2]]]

  return goal_arr

orig_arr_three_dim = np.random.randint(0,upper_count,size=(batch_dim,first_dim,second_dim))
orig_arr_three_dim.shape # (2,5,7)

reshaped = create_input_3d(orig_arr_three_dim)
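
Worth noting (my observation, not part of the original post): with the per-position lookup above, NumPy's advanced indexing already appends the trailing depth axis of a, so the whole np.indices construction collapses into a one-liner:

direct = a[orig_arr_three_dim]        # shape (2, 5, 7, 10)
print(np.allclose(direct, reshaped))  # True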

Then I decided to create a custom layer to improve performance and do the conversion on the fly (to save memory):

import tensorflow as tf
from tensorflow import keras
import numpy as np

#custom layer
class CustLayer(keras.layers.Layer):
    def __init__(self,info_matrix,first_dim,second_dim,info_dim,batch_size):
        super(CustLayer,self).__init__()
        # frozen lookup table: one info vector per id
        self.w = tf.Variable(
            initial_value=info_matrix,trainable=False,dtype=tf.dtypes.float32
        )
        self.info_dim = info_dim
        self.first_dim = first_dim
        self.second_dim = second_dim
        self.batch_size = batch_size

    def call(self,orig_arr):

        goal_arr = tf.Variable(tf.zeros(shape=(self.batch_size,self.first_dim,self.second_dim,self.info_dim),dtype=tf.float32))

        #loop approach (slow): one table lookup and assignment per position
        for example in tf.range(self.batch_size):
          for row in tf.range(self.first_dim):
            for col in tf.range(self.second_dim):
              goal_arr[example,row,col].assign(self.w[orig_arr[example,row,col]])

        return goal_arr

upper_count = 50
info_length = 10
batch_size = 4

first_dim = 5
second_dim = 7
info_dim = 10

info_dict = {k:np.random.random_sample(size = info_length) for k in range(upper_count)} #toy dict that stores the information vector for each id
info_matrix = np.array(list(info_dict.values()))


linear_layer = CustLayer(info_matrix,first_dim=first_dim,second_dim=second_dim,info_dim=info_dim,batch_size=batch_size)

test = []
for i in range(batch_size):
  test.append(np.random.randint(0,upper_count,size=(first_dim,second_dim)))

test = np.asarray(test)
test.shape # (4, 5, 7)

y = linear_layer(test)
y.shape # TensorShape([4, 5, 7, 10])

Since advanced indexing (as in my first code snippet) does not work here, I went back to the naive for loops, which are far too slow.

I am looking for a way to use advanced indexing as in the first snippet, reprogrammed to be tf-compatible. That would let me train on a GPU later on.

In short: the input has shape (batch_size, first_dim, second_dim), the returned tensor has shape (batch_size, first_dim, second_dim, info_dim), and the slow for loops are gone. Thanks in advance.
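
For reference, the usual TF-native replacement for this kind of advanced indexing is tf.gather, which runs on a GPU and needs no Python loop. A minimal sketch (my own, not from the post; GatherLayer is a hypothetical name, and it assumes the same per-position lookup semantics as the code above):

import numpy as np
import tensorflow as tf
from tensorflow import keras

upper_count, info_dim = 50, 10
info_matrix = np.random.random_sample(size=(upper_count, info_dim))

class GatherLayer(keras.layers.Layer):
    def __init__(self, info_matrix):
        super().__init__()
        # frozen lookup table, one row per id
        self.w = tf.constant(info_matrix, dtype=tf.float32)

    def call(self, orig_arr):
        # tf.gather picks one row of self.w per integer in orig_arr:
        # output[b, i, j, :] = self.w[orig_arr[b, i, j]]
        return tf.gather(self.w, orig_arr)

layer = GatherLayer(info_matrix)
ids = tf.random.uniform((4, 5, 7), maxval=upper_count, dtype=tf.int32)
print(layer(ids).shape)  # (4, 5, 7, 10)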

Other answers I have looked at: one from 2016, and also one for old tf.

Solution

For anyone else looking for an answer, this is what I finally came up with:

import tensorflow as tf
from tensorflow import keras
import numpy as np
import time

class CustLayer(keras.layers.Layer):
    def __init__(self,info_matrix,first_dim,second_dim,info_dim,batch_size):
        super(CustLayer,self).__init__()
        self.w = tf.Variable(
            initial_value=info_matrix,trainable=False,dtype=tf.dtypes.float32
        )
        self.info_matrix = info_matrix
        self.info_dim = info_dim
        self.first_dim = first_dim
        self.second_dim = second_dim
        self.batch_size = batch_size
   
    def my_numpy_func(self,x):
      # x will be a numpy array with the contents of the input to the
      # tf.function
      shape = x.shape
      goal_arr = np.zeros(shape=(shape[0],shape[1],shape[2],self.info_dim),dtype=np.float32)

      # index grids used to expand every id into its info vector:
      # goal_arr[b,i,j,:] = info_matrix[x[b,i,j]]
      idx = np.indices(shape)
      goal_arr[idx[0],idx[1],idx[2]] = self.info_matrix[x[idx[0],idx[1],idx[2]]]

      # the dynamic shape is returned alongside the data, because
      # tf.numpy_function erases static shape information
      shape_arr = np.array([shape[0],shape[1],shape[2]],dtype=np.int32)
      #tf.print("Shape:",shape)
      #tf.print("Shape_arr:",shape_arr)
      #tf.print("Type:",type(shape_arr))
      return goal_arr,shape_arr

    @tf.function(input_signature=[tf.TensorSpec((None,39,25),tf.int64)])
    def tf_function(self,input):

      y,shape_arr = tf.numpy_function(self.my_numpy_func,[input],[tf.float32,tf.int32],"Nameless")
      #tf.print("shape_arr",shape_arr)
      # restore the shape that tf.numpy_function dropped
      y = tf.reshape(y,shape=(shape_arr[0],shape_arr[1],shape_arr[2],self.info_dim))
      return y

    def call(self,orig_arr):
      return self.tf_function(orig_arr)
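
A quick usage sketch (my own addition; the dims match the input signature hard-coded above):

upper_count, info_dim, batch_size = 50, 10, 4
info_matrix = np.random.random_sample(size=(upper_count, info_dim))

layer = CustLayer(info_matrix, first_dim=39, second_dim=25,
                  info_dim=info_dim, batch_size=batch_size)

ids = np.random.randint(0, upper_count, size=(batch_size, 39, 25)).astype(np.int64)
print(layer(ids).shape)  # (4, 39, 25, 10)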
      

Caveat: this runs on a GPU, but not on a TPU.
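
Two hedged side notes (my additions, not part of the answer): the shape_arr round-trip exists because tf.numpy_function erases static shape information; if my_numpy_func returned only goal_arr, the statically known part of the shape could be re-attached with tf.ensure_shape instead. And since the lookup can also be written with tf.gather (see the sketch further up), the pure-TF variant would lift the TPU restriction as well.

    @tf.function(input_signature=[tf.TensorSpec((None,39,25),tf.int64)])
    def tf_function(self,input):
      # assumes my_numpy_func was changed to return only goal_arr
      y = tf.numpy_function(self.my_numpy_func,[input],tf.float32)
      # re-attach the shape that tf.numpy_function dropped
      return tf.ensure_shape(y,(None,39,25,self.info_dim))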
