
Intel Optimized Tensorflow does not support oneDNN

How to resolve "Intel Optimized Tensorflow does not support oneDNN"

Case 1

Framework: Tensorflow 2.5.0, Intel-Tensorflow 2.5.0

Environment: Google Colab

I have a model that was successfully quantized with LPOT, and I want to run it for inference without using the LPOT API, so I wrote the following inference code:

import tensorflow as tf

# `model` is the SavedModel directory; the tensor names and the test data
# (x, y) are defined earlier in the notebook.
with tf.compat.v1.Session() as sess:
    # Load the LPOT-quantized SavedModel into the session graph.
    tf.compat.v1.saved_model.loader.load(sess, ['serve'], model)
    output = sess.graph.get_tensor_by_name(output_tensor_name)
    predictions = sess.run(output, {input_tensor_name: x})
    mse = tf.reduce_mean(tf.keras.losses.mean_squared_error(y, predictions))
    print(mse.eval())

When the line predictions = sess.run(output, {input_tensor_name: x}) is executed, it fails with:

---------------------------------------------------------------------------
InternalError                             Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1374     try:
-> 1375       return fn(*args)
   1376     except errors.OpError as e:

7 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
   1359       return self._call_tf_sessionrun(options, feed_dict, fetch_list,
-> 1360                                       target_list, run_metadata)
   1361 

/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata)
   1452                                               fetch_list, target_list,
-> 1453                                               run_metadata)
   1454 

InternalError: Missing 0-th output from {{node model/layer_1/Conv2D_eightbit_requantize}}

During handling of the above exception, another exception occurred:

InternalError                             Traceback (most recent call last)
<ipython-input-6-2bddd853d111> in <module>()
      2     tf.compat.v1.saved_model.loader.load(sess, ['serve'], model)
      3     output = sess.graph.get_tensor_by_name(output_tensor_name)
----> 4     predictions = sess.run(output, {input_tensor_name: x[:64]})  # 64,257,60,1
      5     mse = tf.reduce_mean(tf.keras.losses.mean_squared_error(y[:64], predictions))
      6     print(mse.eval())

/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
    966     try:
    967       result = self._run(None, fetches, feed_dict, options_ptr,
--> 968                          run_metadata_ptr)
    969       if run_metadata:
    970         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1189     if final_fetches or final_targets or (handle and feed_dict_tensor):
   1190       results = self._do_run(handle, final_targets, final_fetches,
-> 1191                              feed_dict_tensor, options, run_metadata)
   1192     else:
   1193       results = []

/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
   1367     if handle is None:
   1368       return self._do_call(_run_fn, feeds, fetches, targets, options,
-> 1369                            run_metadata)
   1370     else:
   1371       return self._do_call(_prun_fn, handle, feeds, fetches)

/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
   1392                     '\nsession_config.graph_options.rewrite_options.'
   1393                     'disable_meta_optimizer = True')
-> 1394       raise type(e)(node_def, op, message)
   1395 
   1396   def _extend_graph(self):

InternalError: Missing 0-th output from node model/layer_1/Conv2D_eightbit_requantize (defined at <ipython-input-6-2bddd853d111>:2)

The error occurs regardless of whether Intel-Tensorflow==2.5.0 is installed, and it is not resolved by explicitly setting os.environ['TF_ENABLE_ONEDNN_OPTS'] = '1'.
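For completeness, this is roughly how I set the flag (a minimal sketch; the model and data are as above). As far as I know the variable is read when TensorFlow initializes, so it has to be set before the first import:

import os

# Set before the first `import tensorflow`; the flag is read when the
# library initializes, so setting it afterwards has no effect.
os.environ['TF_ENABLE_ONEDNN_OPTS'] = '1'

import tensorflow as tf
print(tf.__version__)  # 2.5.0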

On the other hand, when I run the same code in VS Code with Python 3.6.8 64-bit (base: conda), it returns the same error message as in Case 2.

Case 2

Framework: Tensorflow 2.4.0, Intel-Tensorflow 2.4.0

Environment: Google Colab

This case runs fine and prints the MSE loss of the predictions. But when I uninstall Intel-Tensorflow 2.4.0 and run with only the official Tensorflow, executing the same line as in Case 1:

predictions = sess.run(output, {input_tensor_name: x})

raises the following error, which persists even with TF_ENABLE_ONEDNN_OPTS=1 set explicitly:

---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
   1357       # Ensure any changes to the graph are reflected in the runtime.
-> 1358       self._extend_graph()
   1359       return self._call_tf_sessionrun(options, feed_dict, fetch_list,

/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _extend_graph(self)
   1397     with self._graph._session_run_lock():  # pylint: disable=protected-access
-> 1398       tf_session.ExtendSession(self._session)
   1399 

InvalidArgumentError: No OpKernel was registered to support Op 'QuantizedMatMulWithBiasAndDequantize' used by {{node model/dense/tensordot/MatMul_eightbit_requantize}} with these attrs: [input_quant_mode="MIN_FIRST", T1=DT_QUINT8, Toutput=DT_FLOAT, T2=DT_QINT8, Tbias=DT_QINT32, transpose_a=false, transpose_b=false]
Registered devices: [CPU]
Registered kernels:
  <no registered kernels>

	 [[model/dense/tensordot/MatMul_eightbit_requantize]]

During handling of the above exception, another exception occurred:

InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-6-2bddd853d111> in <module>()
      2     tf.compat.v1.saved_model.loader.load(sess, ['serve'], model)

/usr/local/lib/python3.7/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
-> 1394       raise type(e)(node_def, op, message)
   1395 
   1396   def _extend_graph(self):

InvalidArgumentError: No OpKernel was registered to support Op 'QuantizedMatMulWithBiasAndDequantize' used by node model/dense/tensordot/MatMul_eightbit_requantize (defined at <ipython-input-6-2bddd853d111>:2) with these attrs: [input_quant_mode="MIN_FIRST", T1=DT_QUINT8, Toutput=DT_FLOAT, T2=DT_QINT8, Tbias=DT_QINT32, transpose_a=false, transpose_b=false]
Registered devices: [CPU]
Registered kernels:
  <no registered kernels>

	 [[model/dense/tensordot/MatMul_eightbit_requantize]]

Conclusion

I believe both cases stem from the same kind of error, namely "No OpKernel was registered to support Op ...".

I learned that after installing the official Tensorflow v2.5 (reference) and setting the environment variable os.environ['TF_ENABLE_ONEDNN_OPTS'] = '1', quantized models should run with oneDNN support. But this does not seem to be the case in either v2.4 or v2.5.

My question is: how do I get an official Tensorflow 2.5 environment with oneDNN support, without having to install Intel-Tensorflow? Or, why is Intel-Tensorflow not working? Thanks.

Solution

LPOT is released as part of the Intel® AI Analytics Toolkit and works with Intel Optimizations for TensorFlow. LPOT can run on any Intel CPU to quantize AI models. Intel Optimized TensorFlow 2.5.0 requires the environment variable TF_ENABLE_MKL_NATIVE_FORMAT=0 to be set before running LPOT quantization or deploying the quantized model.
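For illustration, a minimal sketch of setting the variable in Python; exporting it in the shell before launching Python (export TF_ENABLE_MKL_NATIVE_FORMAT=0) works equally well, as long as it happens before TensorFlow is imported:

import os

# Required by Intel Optimized TensorFlow 2.5.0, both when quantizing with
# LPOT and when deploying the quantized model; set before importing tensorflow.
os.environ['TF_ENABLE_MKL_NATIVE_FORMAT'] = '0'

import tensorflow as tf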

For more details, please refer to this.

Could you please check whether you quantized the model in Tensorflow 2.4 and ran inference on Tensorflow 2.5? A plausible explanation for the model failing to run in Tensorflow 2.5 while running in Tensorflow 2.4 is that the operators supported in Tensorflow 2.5 may not support models created in Tensorflow 2.4.
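As a quick check, printing the version in both the quantization environment and the inference environment rules out a mismatch, for example:

import tensorflow as tf

# Run this in both environments; the versions should match (e.g. both 2.5.0).
print('TensorFlow version:', tf.__version__)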
