How to fix "Input tensor &lt;name&gt; enters the loop with shape, but has shape &lt;unknown&gt; after one iteration"
The code has been tested and works as expected in eager mode (debugging). However, it fails when executed non-eagerly.
The method receives a namedtuple named `Hyp`, defined as follows:

```python
Hyp = namedtuple(
    'Hyp', field_names='score,yseq,encoder_state,decoder_state,decoder_output'
)
```

The while loop is invoked like this:

```python
_, hyp = tf.while_loop(
    cond=condition_, body=body_, loop_vars=(tf.constant(0, dtype=tf.int32), hyp),
    shape_invariants=(
        tf.TensorShape([]), tf.nest.map_structure(get_shape_invariants, hyp)
    )
)
```

Here is the relevant part of `body_`:

```python
def body_(i_, hypothesis_: Hyp):
    # [:] Collapsed some code ..
    def update_from_next_id_():
        return Hyp(
            # Update values ..
        )
    # The only place where I generate a new hypothesis_ namedtuple
    hypothesis_ = tf.cond(
        tf.not_equal(next_id, blank), true_fn=lambda: update_from_next_id_(), false_fn=lambda: hypothesis_
    )
    return i_ + 1, hypothesis_
```

What I get is a `ValueError`:

```
ValueError: Input tensor 'hypotheses:0' enters the loop with shape (), but has shape <unknown> after one iteration. To allow the shape to vary across iterations, use the `shape_invariants` argument of tf.while_loop to specify a less-specific shape.
```
What could be the problem here?

Below is how I define the `input_signature` of the `tf.function` I want to serialize. Here, `self.greedy_decode_impl` is the actual implementation and `self.greedy_decode` is what I call. I know it's a bit ugly:
```python
self.greedy_decode = tf.function(
    self.greedy_decode_impl, input_signature=(
        tf.TensorSpec([1, None, self.config.encoder.lstm_units], dtype=tf.float32),
        Hyp(
            score=tf.TensorSpec([], dtype=tf.float32),
            yseq=tf.TensorSpec([1, None], dtype=tf.int32),
            encoder_state=tuple(
                (tf.TensorSpec([1, lstm.units], dtype=tf.float32), tf.TensorSpec([1, lstm.units], dtype=tf.float32))
                for (lstm, _) in self.encoder_network.lstm_stack
            ),
            decoder_state=tuple(
                (tf.TensorSpec([1, lstm.units], dtype=tf.float32), tf.TensorSpec([1, lstm.units], dtype=tf.float32))
                for (lstm, _) in self.predict_network.lstm_stack
            ),
            decoder_output=tf.TensorSpec([1, self.config.decoder.lstm_units], dtype=tf.float32)
        ),
    )
)
```
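To sanity-check the `input_signature` mechanics in isolation: `tf.function` treats a namedtuple of `TensorSpec`s as a nested structure and matches it field by field. Here is a minimal sketch with a hypothetical `Point` namedtuple (not part of my code), just to show the pattern:

```python
from collections import namedtuple
import tensorflow as tf

# Hypothetical namedtuple, for illustration only.
Point = namedtuple('Point', field_names='x,y')

# tf.function accepts a namedtuple of TensorSpecs as one signature entry.
@tf.function(input_signature=(
    Point(x=tf.TensorSpec([], tf.float32), y=tf.TensorSpec([], tf.float32)),
))
def norm(p):
    return tf.sqrt(p.x * p.x + p.y * p.y)

print(norm(Point(x=tf.constant(3.0), y=tf.constant(4.0))).numpy())  # 5.0
```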
Here is the implementation of `greedy_decode_impl`. Why does it work in eager mode but not in non-eager mode? According to the documentation of `tf.while_loop`, it should work:

```python
def greedy_decode_impl(self, encoder_outputs: tf.Tensor, hypotheses: Hyp, blank=0) -> Hyp:
    hyp = hypotheses
    encoder_outputs = encoder_outputs[0]

    def condition_(i_, *_):
        time_steps = tf.shape(encoder_outputs)[0]
        return tf.less(i_, time_steps)

    def body_(i_, hypothesis_: Hyp):
        encoder_output_ = tf.reshape(encoder_outputs[i_], shape=(1, 1, -1))
        join_out = self.join_network((encoder_output_, hypothesis_.decoder_output), training=False)
        logits = tf.squeeze(tf.nn.log_softmax(tf.squeeze(join_out)))
        next_id = tf.argmax(logits, output_type=tf.int32)
        log_prob = logits[next_id]
        next_id = tf.reshape(next_id, (1, 1))

        def update_from_next_id_():
            decoder_output_, decoder_state_ = self.predict_network(
                next_id, memory_states=hypothesis_.decoder_state, training=False
            )
            return Hyp(
                score=hypothesis_.score + log_prob,
                yseq=tf.concat([hypothesis_.yseq, next_id], axis=0),
                decoder_state=decoder_state_,
                decoder_output=decoder_output_,
                encoder_state=hypothesis_.encoder_state
            )

        hypothesis_ = tf.cond(
            tf.not_equal(next_id, blank), true_fn=lambda: update_from_next_id_(), false_fn=lambda: hypothesis_
        )
        return i_ + 1, hypothesis_

    _, hyp = tf.while_loop(
        cond=condition_, body=body_, loop_vars=(tf.constant(0, dtype=tf.int32), hyp),
        shape_invariants=(
            tf.TensorShape([]), tf.nest.map_structure(get_shape_invariants, hyp)
        )
    )
    return hyp
```
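The eager/graph difference itself can be reproduced with a tiny sketch (not my code): executed eagerly, `tf.while_loop` runs as a plain Python loop and never checks shape invariants; during `tf.function` tracing, a loop variable whose static shape changes across iterations raises exactly this `ValueError` unless a less-specific `shape_invariants` entry is supplied:

```python
import tensorflow as tf

def grow(x):
    # Each iteration appends one element, so x's static shape changes.
    return tf.while_loop(
        cond=lambda i, v: i < 3,
        body=lambda i, v: (i + 1, tf.concat([v, v[-1:]], axis=0)),
        loop_vars=(tf.constant(0), x),
    )[1]

# Eager: executes as a plain Python loop, no invariant check.
print(grow(tf.ones([2])).shape)  # (5,)

# Graph: tracing detects the changing shape and raises ValueError
# ("enters the loop with shape (2,), but has shape (3,) after one iteration").
try:
    tf.function(grow)(tf.ones([2]))
except ValueError as err:
    print('graph mode:', type(err).__name__)
```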
Fibonacci example

To check whether this should work with a namedtuple, I implemented the Fibonacci sequence using similar mechanics. To include a condition, the loop stops appending new numbers once it reaches step n // 2. As we can see below, the approach works without Python side effects:
```python
from collections import namedtuple

import tensorflow as tf

FibonacciStep = namedtuple('FibonacciStep', field_names='seq,prev_value')


def shape_list(x):
    static = x.shape.as_list()
    dynamic = tf.shape(x)
    return [dynamic[i] if s is None else s for i, s in enumerate(static)]


def get_shape_invariants(tensor):
    shapes = shape_list(tensor)
    return tf.TensorShape([i if isinstance(i, int) else None for i in shapes])


def save_tflite(fp, concrete_fn):
    converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_fn])
    converter.experimental_new_converter = True
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
    converter.optimizations = []
    tflite_model = converter.convert()
    with tf.io.gfile.GFile(fp, 'wb') as f:
        f.write(tflite_model)


@tf.function(
    input_signature=(
        tf.TensorSpec([], dtype=tf.int32),
        FibonacciStep(
            seq=tf.TensorSpec([1, None], dtype=tf.int32),
            prev_value=tf.TensorSpec([], dtype=tf.int32),
        )
    )
)
def fibonacci(n: tf.Tensor, fibo: FibonacciStep):
    def cond_(i_, *args):
        return tf.less(i_, n)

    def body_(i_, fibo_: FibonacciStep):
        prev_value = fibo_.seq[0, -1] + fibo_.prev_value

        def append_value():
            return FibonacciStep(
                seq=tf.concat([fibo_.seq, tf.reshape(prev_value, shape=(1, 1))], axis=-1),
                prev_value=fibo_.seq[0, -1]
            )

        fibo_ = tf.cond(
            tf.less_equal(i_, n // 2), true_fn=lambda: append_value(), false_fn=lambda: fibo_
        )
        return i_ + 1, fibo_

    _, fibo = tf.while_loop(
        cond=cond_, body=body_, loop_vars=(0, fibo),
        shape_invariants=(tf.TensorShape([]), tf.nest.map_structure(get_shape_invariants, fibo))
    )
    return fibo


def main():
    n = tf.constant(10, dtype=tf.int32)
    fibo = FibonacciStep(
        seq=tf.constant([[0, 1]], dtype=tf.int32),
        prev_value=tf.constant(0, dtype=tf.int32)
    )
    fibo = fibonacci(n, fibo=fibo)
    fibo = fibonacci(n + 10, fibo=fibo)
    fp = '/tmp/fibonacci.tflite'
    concrete_fn = fibonacci.get_concrete_function()
    save_tflite(fp, concrete_fn)
    print(fibo.seq.numpy()[0].tolist())
    print('All done.')


if __name__ == '__main__':
    main()
```
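The reason `tf.nest.map_structure(get_shape_invariants, fibo)` can be passed directly as a `shape_invariants` entry is that `tf.nest` treats a namedtuple as a nested structure, visits every tensor field, and returns a namedtuple of the same type. A minimal sketch (using a hypothetical `Pair` stand-in for `FibonacciStep`):

```python
from collections import namedtuple
import tensorflow as tf

# Hypothetical stand-in for FibonacciStep, for illustration only.
Pair = namedtuple('Pair', field_names='seq,prev_value')

p = Pair(seq=tf.zeros([1, 4]), prev_value=tf.zeros([]))

# map_structure maps over each tensor field and preserves the namedtuple type.
shapes = tf.nest.map_structure(lambda t: t.shape.as_list(), p)
print(shapes)  # Pair(seq=[1, 4], prev_value=[])
```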
Workaround

Well, it turns out that

```python
tf.concat([hypothesis_.yseq, next_id], axis=0),
```

should be

```python
tf.concat([hypothesis_.yseq, next_id], axis=-1),
```

To be fair, the error message kind of hints at where to look, but the description is far from "helpful". I violated the `TensorSpec` by concatenating on the wrong axis, but TensorFlow cannot point directly to the affected tensor (yet).
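To make the violated invariant concrete, here is a minimal sketch (not the original code): `yseq` is declared as `tf.TensorSpec([1, None])`, i.e. a single row that may only grow along its last axis, so concatenating along axis 0 produces a shape the invariant cannot cover:

```python
import tensorflow as tf

yseq = tf.zeros([1, 1], tf.int32)                  # declared spec: [1, None]
next_id = tf.reshape(tf.constant(4, tf.int32), (1, 1))

wrong = tf.concat([yseq, next_id], axis=0)   # grows axis 0 -> [2, 1], violates [1, None]
right = tf.concat([yseq, next_id], axis=-1)  # grows the last axis -> [1, 2], matches [1, None]

print(wrong.shape, right.shape)  # (2, 1) (1, 2)
```

Eagerly both variants run without complaint, which is exactly why the bug only surfaced during tracing.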