
Can someone help me with the TensorFlow error "ValueError: None values not supported" that I get when trying to populate a tf dataset?

I store my data in a rank-2 (N x 3) tensor, and I am trying to gather specific rows of that tensor to produce the data for a single example to feed into a Keras training loop. The idea is basically simple: two indices Ib (e.g. 100) and Ie (e.g. 207), unique to each example, specify that my input data is a 300 x 3 tensor drawn from rows 76, 77, 78, ..., 99, 100, 105, 110, 115, ..., 205, 207, 208, 209, ... of the data tensor (300 indices per example in total). Note the stride of 5 between Ib and Ie. While the attached code works fine when the data is fetched manually (as shown in the code example), something goes wrong when TensorFlow later tries to do the same. I also know that the code below would work if I simply picked a fixed number of rows before and after Ib. The line updates = tf.range(start=Ib,limit=Ie,delta=5,dtype=tf.int32) does not seem to produce any sensible output when called with the symbolic tensors Ib, Ie (?), causing the code to break on the next line. I am using Python 3.7 / TensorFlow 2.3.0, and the attached code produces the output

2020-08-18 10:25:51.384252: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-08-18 10:25:51.394210: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7ff732d46110 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-08-18 10:25:51.394226: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host,Default Version

trainINDX =   0,X.shape = [ 300,3 ]
trainINDX =   1,X.shape = [ 300,3 ]
trainINDX =   2,X.shape = [ 300,3 ]
trainINDX =   3,X.shape = [ 300,3 ]
trainINDX =   4,X.shape = [ 300,3 ]
trainINDX =   5,X.shape = [ 300,3 ]
trainINDX =   6,X.shape = [ 300,3 ]
trainINDX =   7,X.shape = [ 300,3 ]
trainINDX =   8,X.shape = [ 300,3 ]
trainINDX =   9,X.shape = [ 300,3 ]

Traceback (most recent call last):
  File "testData.py",line 93,in <module>
    main( )
  File "testData.py",line 80,in main
    trainData = trainData.map(fetchData,num_parallel_calls=tf.data.experimental.AUTOTUNE)
  File "/Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/data/ops/dataset_ops.py",line 1702,in map
    preserve_cardinality=True)
  File "/Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/data/ops/dataset_ops.py",line 4084,in __init__
    use_legacy_function=use_legacy_function)
  File "/Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/data/ops/dataset_ops.py",line 3371,in __init__
    self._function = wrapper_fn.get_concrete_function()
  File "/Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/eager/function.py",line 2939,in get_concrete_function
    *args,**kwargs)
  File "/Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/eager/function.py",line 2906,in _get_concrete_function_garbage_collected
    graph_function,args,kwargs = self._maybe_define_function(args,kwargs)
  File "/Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/eager/function.py",line 3213,in _maybe_define_function
    graph_function = self._create_graph_function(args,kwargs)
  File "/Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/eager/function.py",line 3075,in _create_graph_function
    capture_by_value=self._capture_by_value)
  File "/Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/framework/func_graph.py",line 986,in func_graph_from_py_func
    func_outputs = python_func(*func_args,**func_kwargs)
  File "/Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/data/ops/dataset_ops.py",line 3364,in wrapper_fn
    ret = _wrapper_helper(*args)
  File "/Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/data/ops/dataset_ops.py",line 3299,in _wrapper_helper
    ret = autograph.tf_convert(func,ag_ctx)(*nested_args)
  File "/Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/autograph/impl/api.py",line 258,in wrapper
    raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:

    testData.py:57 fetchData  *
        indices    = tf.range(start=sampleCount,limit=sampleCount+updates.shape[0],dtype=tf.int32)
    /Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/ops/variables.py:1074 _run_op
        return tensor_oper(a.value(),*args,**kwargs)
    /Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/ops/math_ops.py:1125 binary_op_wrapper
        return func(x,y,name=name)
    /Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/util/dispatch.py:201 wrapper
        return target(*args,**kwargs)
    /Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/ops/math_ops.py:1443 _add_dispatch
        y = ops.convert_to_tensor(y,dtype_hint=x.dtype.base_dtype,name="y")
    /Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/framework/ops.py:1499 convert_to_tensor
        ret = conversion_func(value,dtype=dtype,name=name,as_ref=as_ref)
    /Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/framework/constant_op.py:338 _constant_tensor_conversion_function
        return constant(v,name=name)
    /Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/framework/constant_op.py:264 constant
        allow_broadcast=True)
    /Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/framework/constant_op.py:282 _constant_impl
        allow_broadcast=allow_broadcast))
    /Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/framework/tensor_util.py:444 make_tensor_proto
        raise ValueError("None values not supported.")

    ValueError: None values not supported.
import numpy as np
import tensorflow as tf

def main( ):

    inputData      = np.zeros((1000,3),dtype=np.float32)
    inputData[:,0] = np.sin(np.arange(1000)/360)
    inputData[:,1] = np.cos(np.arange(1000)/360)
    inputData[:,2] = np.sin(np.arange(1000)/360) * np.cos(np.arange(1000)/360) # Generate some input data
    inputData      = tf.convert_to_tensor(inputData,dtype=tf.float32)

    outputData     = np.random.randint(low=0,high=3,size=1000,dtype=np.int32) # Generate random output data

    eventData      = np.zeros((1000,2),dtype=np.int32)
    eventData[:,0] = np.arange(1000)                                                                  # Begin index of sparse sampling
    eventData[:,1] = np.arange(1000) + np.random.randint(low=80,high=121,size=1000,dtype=np.int32) # End   index of sparse sampling
    eventData      = tf.convert_to_tensor(eventData,dtype=tf.int32)

    totalSampleCount = int(1000)
    eventCount       = int(1000)
    inputDim         = int(300)
    outputDim        = int(3)
    batchSize        = int(256)
    epochCount       = int(5)
    stepsPerEpoch    = int(np.floor(666/batchSize))

    trainINDX      = np.arange(666)
    validationINDX = np.arange(666,1000)

    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(filters=15,kernel_size=15,strides=1,padding='same',dilation_rate=1,activation='relu',use_bias=True,kernel_initializer='glorot_uniform',bias_initializer='zeros',input_shape=(inputDim,3)),
        tf.keras.layers.MaxPooling1D(pool_size=5,strides=None,padding='valid'),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256,use_bias=True),
        tf.keras.layers.Dropout(0.50),
        tf.keras.layers.Dense(outputDim,activation='softmax')
    ])

    model.compile( optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3,beta_1=0.9,beta_2=0.999,amsgrad=True),loss=tf.keras.losses.SparseCategoricalCrossentropy(),metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])

    def fetchData(eventINDX,y): # This function picks up the requested elements from inputData

        sampleINDX = tf.zeros([inputDim],dtype=tf.int32) # Initialize sampleINDX to zero

        Ib = tf.gather_nd(eventData,[tf.cast(eventINDX,dtype=tf.int32),0]) # Begin of sparse sampling
        Ie = tf.gather_nd(eventData,[tf.cast(eventINDX,dtype=tf.int32),1]) # End   of sparse sampling

        indices    = tf.range(start=0,limit=24,dtype=tf.int32)
        updates    = tf.range(start=Ib-24,limit=Ib,dtype=tf.int32)
        sampleINDX = tf.tensor_scatter_nd_update(sampleINDX,tf.expand_dims(indices,axis=1),updates)

        sampleCount = tf.Variable(24,dtype=tf.int32)

        updates    = tf.range(start=Ib,limit=Ie,delta=5,dtype=tf.int32)
        indices    = tf.range(start=sampleCount,limit=sampleCount+updates.shape[0],dtype=tf.int32)
        sampleINDX = tf.tensor_scatter_nd_update(sampleINDX,tf.expand_dims(indices,axis=1),updates)

        sampleCount.assign_add(updates.shape[0])
           
        remainingSampleCount = tf.math.subtract(tf.constant(inputDim,dtype=tf.int32),sampleCount)

        indices    = tf.range(start=sampleCount,limit=inputDim,dtype=tf.int32)
        updates    = tf.range(start=Ie,limit=Ie+remainingSampleCount,dtype=tf.int32)
        sampleINDX = tf.tensor_scatter_nd_update(sampleINDX,tf.expand_dims(indices,axis=1),updates)

        X = tf.gather(inputData,tf.math.floormod(sampleINDX,totalSampleCount),axis=0)

        return X,y

    print('')
    for i in range(10):
        X,y = fetchData(i,outputData[i])
        print('trainINDX = %3d,X.shape = [ %3d,%d ]' % (i,X.shape[0],X.shape[1]))
    print('')

    trainData = tf.data.Dataset.from_tensor_slices((trainINDX,outputData[trainINDX]))
    trainData = trainData.shuffle(buffer_size=trainINDX.size,reshuffle_each_iteration=True) 
    trainData = trainData.map(fetchData,num_parallel_calls=tf.data.experimental.AUTOTUNE)
    trainData = trainData.repeat()
    trainData = trainData.batch(batchSize,drop_remainder=True)
    trainData = trainData.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)

    validationData = tf.data.Dataset.from_tensor_slices((validationINDX,outputData[validationINDX]))
    validationData = validationData.map(fetchData,num_parallel_calls=tf.data.experimental.AUTOTUNE)
    validationData = validationData.batch(batchSize,drop_remainder=False)
    validationData = validationData.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)

    history = model.fit(x=trainData,steps_per_epoch=stepsPerEpoch,validation_data=validationData,verbose=1,epochs=epochCount)

if __name__== "__main__":
  main( )
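For reference, the failure can be reproduced in isolation with a few lines (this is my assumption about the root cause: when the function is traced, as Dataset.map does, Ib and Ie are symbolic scalars, so the result of tf.range has an unknown static shape):

```python
import tensorflow as tf

# Minimal repro sketch: with symbolic Ib/Ie, updates.shape[0] is None at
# trace time; adding that None to a tensor forces convert_to_tensor(None),
# which produces "ValueError: None values not supported."
@tf.function
def probe(Ib, Ie):
    updates = tf.range(start=Ib, limit=Ie, delta=5, dtype=tf.int32)
    return tf.constant(24, dtype=tf.int32) + updates.shape[0]

try:
    probe(tf.constant(100), tf.constant(207))
except (ValueError, TypeError) as err:
    print(err)
```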

Any help in solving the described problem would be greatly appreciated. Thanks in advance!

Workaround

Replacing "updates.shape[0]" with "tf.shape(updates)[0]" seems to fix that particular problem. However, it leads to another issue, described below.
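The difference can be seen in a small standalone sketch (the function name is mine, not from the script below): Tensor.shape is the static shape fixed at trace time, which here is unknown, while tf.shape() emits an op whose value is computed when the graph actually runs:

```python
import tensorflow as tf

@tf.function
def span_length(Ib, Ie):
    updates = tf.range(start=Ib, limit=Ie, delta=5, dtype=tf.int32)
    # updates.shape is (None,) during tracing, but tf.shape(updates)[0]
    # is a scalar tensor evaluated at run time, so it is safe to use here.
    return tf.shape(updates)[0]

n = span_length(tf.constant(100), tf.constant(207))
print(int(n))  # 22 indices between Ib=100 and Ie=207 with stride 5
```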

import numpy as np
import tensorflow as tf

def main( ):

    inputData      = np.zeros((1000,3),dtype=np.float32)
    inputData[:,0] = np.sin(np.arange(1000)/360)
    inputData[:,1] = np.cos(np.arange(1000)/360)
    inputData[:,2] = np.sin(np.arange(1000)/360) * np.cos(np.arange(1000)/360) # Generate some input data
    inputData      = tf.convert_to_tensor(inputData,dtype=tf.float32)

    outputData     = np.random.randint(low=0,high=3,size=1000,dtype=np.int32) # Generate random output data

    eventData      = np.zeros((1000,2),dtype=np.int32)
    eventData[:,0] = np.arange(1000)                                                                  # Begin index of sparse sampling
    eventData[:,1] = np.arange(1000) + np.random.randint(low=80,high=121,size=1000,dtype=np.int32) # End   index of sparse sampling
    eventData      = tf.convert_to_tensor(eventData,dtype=tf.int32)

    totalSampleCount = int(1000)
    eventCount       = int(1000)
    inputDim         = int(300)
    outputDim        = int(3)
    batchSize        = int(256)
    epochCount       = int(5)
    stepsPerEpoch    = int(np.floor(666/batchSize))

    trainINDX      = np.arange(666)
    validationINDX = np.arange(666,1000)

    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(filters=15,kernel_size=15,strides=1,padding='same',dilation_rate=1,activation='relu',use_bias=True,kernel_initializer='glorot_uniform',bias_initializer='zeros',input_shape=(inputDim,3)),
        tf.keras.layers.MaxPooling1D(pool_size=5,strides=None,padding='valid'),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(256,use_bias=True),
        tf.keras.layers.Dropout(0.50),
        tf.keras.layers.Dense(outputDim,activation='softmax')
    ])

    model.compile( optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3,beta_1=0.9,beta_2=0.999,amsgrad=True),loss=tf.keras.losses.SparseCategoricalCrossentropy(),metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])

    def fetchData(eventINDX,y): # This function picks up the requested elements from inputData

        sampleINDX = tf.zeros([inputDim],dtype=tf.int32) # Initialize sampleINDX to zero

        Ib = tf.gather_nd(eventData,[tf.cast(eventINDX,dtype=tf.int32),0]) # Begin of sparse sampling
        Ie = tf.gather_nd(eventData,[tf.cast(eventINDX,dtype=tf.int32),1]) # End   of sparse sampling

        indices    = tf.range(start=0,limit=24,dtype=tf.int32)
        updates    = tf.range(start=Ib-24,limit=Ib,dtype=tf.int32)
        sampleINDX = tf.tensor_scatter_nd_update(sampleINDX,tf.expand_dims(indices,axis=1),updates)

        sampleCount = tf.Variable(24,dtype=tf.int32)

        updates    = tf.range(start=Ib,limit=Ie,delta=5,dtype=tf.int32)
        indices    = tf.range(start=sampleCount,limit=sampleCount+tf.shape(updates)[0],dtype=tf.int32)
        sampleINDX = tf.tensor_scatter_nd_update(sampleINDX,tf.expand_dims(indices,axis=1),updates)

        sampleCount.assign_add(tf.shape(updates)[0])
           
        remainingSampleCount = tf.math.subtract(tf.constant(inputDim,dtype=tf.int32),sampleCount)

        indices    = tf.range(start=sampleCount,limit=inputDim,dtype=tf.int32)
        updates    = tf.range(start=Ie,limit=Ie+remainingSampleCount,dtype=tf.int32)
        sampleINDX = tf.tensor_scatter_nd_update(sampleINDX,tf.expand_dims(indices,axis=1),updates)

        X = tf.gather(inputData,tf.math.floormod(sampleINDX,totalSampleCount),axis=0)

        return X,y

    print('')
    for i in range(10):
        X,y = fetchData(i,outputData[i])
        print('trainINDX = %3d,X.shape = [ %3d,%d ]' % (i,X.shape[0],X.shape[1]))
    print('')

    trainData = tf.data.Dataset.from_tensor_slices((trainINDX,outputData[trainINDX]))
    trainData = trainData.shuffle(buffer_size=trainINDX.size,reshuffle_each_iteration=True) 
    trainData = trainData.map(fetchData,num_parallel_calls=tf.data.experimental.AUTOTUNE)
    trainData = trainData.repeat()
    trainData = trainData.batch(batchSize,drop_remainder=True)
    trainData = trainData.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)

    validationData = tf.data.Dataset.from_tensor_slices((validationINDX,outputData[validationINDX]))
    validationData = validationData.map(fetchData,num_parallel_calls=tf.data.experimental.AUTOTUNE)
    validationData = validationData.batch(batchSize,drop_remainder=False)
    validationData = validationData.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)

    history = model.fit(x=trainData,steps_per_epoch=stepsPerEpoch,validation_data=validationData,verbose=1,epochs=epochCount)

if __name__== "__main__":
  main( )

The attached code now produces the output

trainINDX =   0,X.shape = [ 300,3 ]
trainINDX =   1,X.shape = [ 300,3 ]
trainINDX =   2,X.shape = [ 300,3 ]
trainINDX =   3,X.shape = [ 300,3 ]
trainINDX =   4,X.shape = [ 300,3 ]
trainINDX =   5,X.shape = [ 300,3 ]
trainINDX =   6,X.shape = [ 300,3 ]
trainINDX =   7,X.shape = [ 300,3 ]
trainINDX =   8,X.shape = [ 300,3 ]
trainINDX =   9,X.shape = [ 300,3 ]

Epoch 1/5
Traceback (most recent call last):
  File "testData.py",line 93,in <module>
    main( )
  File "testData.py",line 90,in main
    history = model.fit(x=trainData,steps_per_epoch=stepsPerEpoch,validation_data=validationData,verbose=1,epochs=epochCount)
  File "/Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/keras/engine/training.py",line 108,in _method_wrapper
    return method(self,*args,**kwargs)
  File "/Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/keras/engine/training.py",line 1098,in fit
    tmp_logs = train_function(iterator)
  File "/Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/eager/def_function.py",line 780,in __call__
    result = self._call(*args,**kwds)
  File "/Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/eager/def_function.py",line 840,in _call
    return self._stateless_fn(*args,**kwds)
  File "/Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/eager/function.py",line 2829,in __call__
    return graph_function._filtered_call(args,kwargs)  # pylint: disable=protected-access
  File "/Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/eager/function.py",line 1848,in _filtered_call
    cancellation_manager=cancellation_manager)
  File "/Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/eager/function.py",line 1924,in _call_flat
    ctx,args,cancellation_manager=cancellation_manager))
  File "/Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/eager/function.py",line 550,in call
    ctx=ctx)
  File "/Users/relaxation82/Library/Python/3.7/lib/python/site-packages/tensorflow/python/eager/execute.py",line 60,in quick_execute
    inputs,attrs,num_outputs)
tensorflow.python.framework.errors_impl.FailedPreconditionError:  Error while reading resource variable _AnonymousVar24 from Container: localhost. This could mean that the variable was uninitialized. Not found: Resource localhost/_AnonymousVar24/N10tensorflow3VarE does not exist.
     [[{{node range_3/ReadVariableOp}}]]
     [[IteratorGetNext]] [Op:__inference_train_function_1569]

Function call stack:
train_function
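A note beyond the traceback: the FailedPreconditionError is consistent with creating a tf.Variable inside the function passed to Dataset.map — each trace produces a fresh anonymous variable that is never initialized in the graph the iterator runs. One possible way out (a hypothetical, variable-free sketch of the index construction described in the question; the function name is mine) is to build the 300 row indices with tf.concat instead of a variable counter plus scatter updates:

```python
import tensorflow as tf

def make_sample_indices(Ib, Ie, inputDim=300):
    # 24 consecutive rows before Ib, every 5th row from Ib up to Ie,
    # then consecutive rows from Ie until inputDim indices are collected.
    head   = tf.range(Ib - 24, Ib, dtype=tf.int32)      # Ib-24 .. Ib-1
    sparse = tf.range(Ib, Ie, delta=5, dtype=tf.int32)  # Ib, Ib+5, ..., < Ie
    used   = 24 + tf.shape(sparse)[0]                   # dynamic count so far
    tail   = tf.range(Ie, Ie + (inputDim - used), dtype=tf.int32)
    return tf.concat([head, sparse, tail], axis=0)

idx = make_sample_indices(tf.constant(100), tf.constant(207))
print(int(tf.shape(idx)[0]))  # 300
```

Because everything here is an ordinary tensor op, the function should trace cleanly inside Dataset.map, and the resulting indices can be fed to tf.gather exactly as in fetchData.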
