
TensorFlow cannot quantize the reshape operation

How to fix: TensorFlow cannot quantize the reshape operation

I want to train my model with quantization-aware training (QAT). However, tensorflow_model_optimization cannot quantize the tf.reshape function and raises an error.

  1. TensorFlow version: '2.4.0-dev20200903'
  2. Python version: 3.6.9

Code

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '3'
from tensorflow.keras.applications import VGG16
import tensorflow_model_optimization as tfmot
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

quantize_model = tfmot.quantization.keras.quantize_model

inputs = keras.Input(shape=(784,))
# img_inputs = keras.Input(shape=(32,32,3))

dense = layers.Dense(64, activation="relu")
x = dense(inputs)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10)(x)
outputs = tf.reshape(outputs, [-1, 2, 5])  # raw TF op, not a Keras layer
model = keras.Model(inputs=inputs, outputs=outputs, name="mnist_model")

# keras.utils.plot_model(model, "my_first_model.png")

q_aware_model = quantize_model(model)

Output

Traceback (most recent call last):

  File "<ipython-input-39-af601b78c010>", line 14, in <module>
    q_aware_model = quantize_model(model)

  File "/home/essys/.local/lib/python3.6/site-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize.py", line 137, in quantize_model
    annotated_model = quantize_annotate_model(to_quantize)

  File "/home/essys/.local/lib/python3.6/site-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize.py", line 210, in quantize_annotate_model
    to_annotate, input_tensors=None, clone_function=_add_quant_wrapper)
...

  File "/home/essys/anaconda3/envs/tf_gpu/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py", line 667, in wrapper
    raise e.ag_error_metadata.to_exception(e)

TypeError: in user code:


    TypeError: tf__call() got an unexpected keyword argument 'shape'

If anyone knows the cause, please help.

Solution

The underlying reason is that your layer is not yet supported by QAT. If you want to quantize it, you have to write the quantization yourself via quantize_annotate_layer, pass it through quantize_scope, and then apply it to the model via quantize_apply, as described here: https://www.tensorflow.org/model_optimization/guide/quantization/training_comprehensive_guide?hl=en#quantize_custom_keras_layer

I have created an example with batch_norm_layer here.

TensorFlow 2.x support for QAT layers is not yet complete; consider using tf1.x instead, by adding FakeQuant ops after the operators.
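As a sketch of that TF1-style approach, a FakeQuant op can be placed directly after an operator's output; tf.quantization.fake_quant_with_min_max_args is also callable from TF2, and the range and bit width below are illustrative assumptions rather than values from the answer:

```python
import tensorflow as tf

x = tf.constant([[0.1, 2.7, -1.3]])
# Simulate 8-bit quantization of the activations over [-6, 6]
# (range and num_bits are illustrative, not from the original answer).
y = tf.quantization.fake_quant_with_min_max_args(x, min=-6.0, max=6.0, num_bits=8)
```

During training the forward pass then sees the quantization error while gradients flow through the op (a straight-through estimator), which is what the FakeQuant-insertion workflow in tf1.x relied on.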
