
I'm stuck on a problem with TensorRT and TensorFlow. I'm using an NVIDIA Jetson Nano and trying to convert a simple TensorFlow model into a TensorRT-optimized model. I'm on tensorflow 2.1.0 and python 3.6.9, and I'm trying to use this code sample from the NVIDIA guide:

from tensorflow.python.compiler.tensorrt import trt_convert as trt
converter = trt.TrtGraphConverterV2(input_saved_model_dir=input_saved_model_dir)
converter.convert()
converter.save(output_saved_model_dir)

To test this, I took a simple example from the TensorFlow website. To convert it to a TensorRT model, I save the model as a SavedModel and then load it into trt.TrtGraphConverterV2:

#https://www.tensorflow.org/tutorials/quickstart/beginner

import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt
import os

#mnist = tf.keras.datasets.mnist

#(x_train, y_train), (x_test, y_test) = mnist.load_data()
#x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),
  tf.keras.layers.Dense(128, activation='relu'),
  #tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10)
])

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

model.compile(optimizer='adam', loss=loss_fn, metrics=['accuracy'])


# create paths to save models
model_name = "simpleModel"
pb_model  = os.path.join(os.path.dirname(os.path.abspath(__file__)),(model_name+"_pb")) 
trt_model = os.path.join(os.path.dirname(os.path.abspath(__file__)),(model_name+"_trt")) 

if not os.path.exists(pb_model):
    os.mkdir(pb_model)

if not os.path.exists(trt_model):
    os.mkdir(trt_model)

tf.saved_model.save(model, pb_model)


# https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#usage-example
print("\nconverting to trt-model")
converter = trt.TrtGraphConverterV2(input_saved_model_dir=pb_model )
print("\nconverter.convert")
converter.convert()
print("\nconverter.save")
converter.save(trt_model)

print("trt-model saved under: ",trt_model)

When I run this code, a TRT-optimized model is saved, but the model is not usable. For example, when I load the model and call model.summary(), I get:

Traceback (most recent call last):
  File "/home/al/Code/Benchmark_70x70/test-load-pb.py", line 45, in <module>
    model.summary()
AttributeError: '_UserObject' object has no attribute 'summary'
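
For context, the relevant part of test-load-pb.py boils down to something like this (a minimal sketch; the tf.saved_model.load call is an assumption, but it returns a generic '_UserObject' rather than a tf.keras.Model, which is why summary() is missing):

# Minimal sketch of the failing load test (assumed; the real script may differ).
# The path is the one printed by the converter script below.
import tensorflow as tf

model = tf.saved_model.load('/home/al/Code/Benchmark_70x70/simpleModel_trt')
model.summary()  # AttributeError: '_UserObject' object has no attribute 'summary'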

This is the complete output of the converter script:

2020-04-01 20:38:07.395780: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-04-01 20:38:11.837436: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
2020-04-01 20:38:11.879775: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.6
2020-04-01 20:38:17.015440: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-04-01 20:38:17.054065: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:17.061718: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: 
pciBusID: 0000:00:00.0 name: NVIDIA Tegra X1 computeCapability: 5.3
coreClock: 0.9216GHz coreCount: 1 deviceMemorySize: 3.87GiB deviceMemoryBandwidth: 23.84GiB/s
2020-04-01 20:38:17.061853: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-04-01 20:38:17.061989: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-04-01 20:38:17.145546: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-04-01 20:38:17.252192: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-04-01 20:38:17.368195: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-04-01 20:38:17.433245: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-04-01 20:38:17.433451: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-04-01 20:38:17.433761: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:17.434112: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:17.434418: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-04-01 20:38:17.483529: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2020-04-01 20:38:17.504302: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x13e7b0f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-04-01 20:38:17.504407: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-04-01 20:38:17.713898: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:17.714293: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x13de1210 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-04-01 20:38:17.714758: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): NVIDIA Tegra X1, Compute Capability 5.3
2020-04-01 20:38:17.715405: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:17.715650: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: 
pciBusID: 0000:00:00.0 name: NVIDIA Tegra X1 computeCapability: 5.3
coreClock: 0.9216GHz coreCount: 1 deviceMemorySize: 3.87GiB deviceMemoryBandwidth: 23.84GiB/s
2020-04-01 20:38:17.715796: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-04-01 20:38:17.715941: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-04-01 20:38:17.716057: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-04-01 20:38:17.716174: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-04-01 20:38:17.716252: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-04-01 20:38:17.716311: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-04-01 20:38:17.716418: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-04-01 20:38:17.716687: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:17.716994: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:17.717111: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-04-01 20:38:17.736625: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-04-01 20:38:30.190208: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-04-01 20:38:30.315240: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102]      0 
2020-04-01 20:38:30.315482: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0:   N 
2020-04-01 20:38:30.832895: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:31.002925: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:31.005861: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 32 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
2020-04-01 20:38:34.803674: W tensorflow/python/util/util.cc:319] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1786: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.

converting to trt-model
2020-04-01 20:38:37.808143: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6

converter.convert
2020-04-01 20:38:39.618691: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:39.618842: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2020-04-01 20:38:39.619224: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-04-01 20:38:39.712117: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:39.712437: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: 
pciBusID: 0000:00:00.0 name: NVIDIA Tegra X1 computeCapability: 5.3
coreClock: 0.9216GHz coreCount: 1 deviceMemorySize: 3.87GiB deviceMemoryBandwidth: 23.84GiB/s
2020-04-01 20:38:39.712594: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-04-01 20:38:39.744930: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-04-01 20:38:40.056630: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-04-01 20:38:40.153461: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-04-01 20:38:40.176047: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-04-01 20:38:40.214052: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-04-01 20:38:40.231552: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-04-01 20:38:40.231927: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:40.232253: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:40.232388: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-04-01 20:38:40.232538: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-04-01 20:38:40.232587: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102]      0 
2020-04-01 20:38:40.232618: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0:   N 
2020-04-01 20:38:40.232890: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:40.233546: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:40.233761: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 32 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
2020-04-01 20:38:40.579950: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:841] Optimization results for grappler item: graph_to_optimize
2020-04-01 20:38:40.580104: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   function_optimizer: Graph size after: 26 nodes (19), 43 edges (36), time = 179.825ms.
2020-04-01 20:38:40.580157: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   function_optimizer: function_optimizer did nothing. time = 0.152ms.
2020-04-01 20:38:40.941994: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:40.942217: I tensorflow/core/grappler/devices.cc:55] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2020-04-01 20:38:40.942412: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-04-01 20:38:40.943756: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:40.943916: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1555] Found device 0 with properties: 
pciBusID: 0000:00:00.0 name: NVIDIA Tegra X1 computeCapability: 5.3
coreClock: 0.9216GHz coreCount: 1 deviceMemorySize: 3.87GiB deviceMemoryBandwidth: 23.84GiB/s
2020-04-01 20:38:40.944010: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.0
2020-04-01 20:38:40.944073: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2020-04-01 20:38:40.944148: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10.0
2020-04-01 20:38:40.944209: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10.0
2020-04-01 20:38:40.944266: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10.0
2020-04-01 20:38:40.944320: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10.0
2020-04-01 20:38:40.944372: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-04-01 20:38:40.944572: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:40.944816: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:40.944911: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1697] Adding visible gpu devices: 0
2020-04-01 20:38:40.944993: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1096] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-04-01 20:38:40.945031: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102]      0 
2020-04-01 20:38:40.945059: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] 0:   N 
2020-04-01 20:38:40.945283: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:40.945569: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:948] ARM64 does not support NUMA - returning NUMA node zero
2020-04-01 20:38:40.945714: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 32 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
2020-04-01 20:38:41.037807: I tensorflow/compiler/tf2tensorrt/segment/segment.cc:460] There are 6 ops of 3 different types in the graph that are not converted to TensorRT: Identity, NoOp, Placeholder, (For more information see https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#supported-ops).
2020-04-01 20:38:41.043736: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:636] Number of TensorRT candidate segments: 1
2020-04-01 20:38:41.046312: I tensorflow/compiler/tf2tensorrt/convert/convert_graph.cc:737] Replaced segment 0 consisting of 12 nodes by TRTEngineOp_0.
2020-04-01 20:38:41.073078: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:841] Optimization results for grappler item: tf_graph
2020-04-01 20:38:41.073159: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   constant_folding: Graph size after: 22 nodes (-4), 35 edges (-8), time = 14.454ms.
2020-04-01 20:38:41.073188: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   layout: Graph size after: 22 nodes (0), 35 edges (0), time = 20.565ms.
2020-04-01 20:38:41.073214: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   constant_folding: Graph size after: 22 nodes (0), 35 edges (0), time = 5.644ms.
2020-04-01 20:38:41.073238: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   TensorRTOptimizer: Graph size after: 11 nodes (-11), 14 edges (-21), time = 28.58ms.
2020-04-01 20:38:41.073265: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   constant_folding: Graph size after: 11 nodes (0), 14 edges (0), time = 2.904ms.
2020-04-01 20:38:41.073289: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:841] Optimization results for grappler item: TRTEngineOp_0_native_segment
2020-04-01 20:38:41.073312: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   constant_folding: Graph size after: 14 nodes (0), 15 edges (0), time = 2.875ms.
2020-04-01 20:38:41.073335: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   layout: Graph size after: 14 nodes (0), 15 edges (0), time = 2.389ms.
2020-04-01 20:38:41.073358: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   constant_folding: Graph size after: 14 nodes (0), 15 edges (0), time = 2.834ms.
2020-04-01 20:38:41.073382: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   TensorRTOptimizer: Graph size after: 14 nodes (0), 15 edges (0), time = 0.218ms.
2020-04-01 20:38:41.073405: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:843]   constant_folding: Graph size after: 14 nodes (0), 15 edges (0), time = 5.268ms.

converter.save
2020-04-01 20:38:46.730260: W tensorflow/core/framework/op_kernel.cc:1655] OP_REQUIRES failed at trt_engine_resource_ops.cc:183 : Not found: Container TF-TRT does not exist. (Could not find resource: TF-TRT/TRTEngineOp_0)
trt-model saved under:  /home/al/Code/Benchmark_70x70/simpleModel_trt


3 Answers


Thanks for the reply; it contained everything I needed. To test the converter script I ran the code in Colab, where it worked without problems, so I think I need to check my environment for errors. Regarding the model.summary() issue: as you pointed out, the Keras API methods seem to be stripped when the model is converted. In particular, I needed model.predict() to use the new model for predictions. Fortunately, there are other ways to run inference. Besides the one you posted, I found and used the one described in this tutorial, and I put the whole example with explanations together in this notebook:

import tensorflow as tf
import matplotlib.pyplot as plt

# train_images and class_names come from the dataset loaded earlier in the notebook
loaded = tf.saved_model.load('./model_trt')  # load the converted model

print("The signature keys are: ", list(loaded.signatures.keys()))
infer = loaded.signatures["serving_default"]  # concrete function used for inference

im_select = 0  # choose the train image you want to classify
# The classification happens here; the output dict is keyed by the name of the
# last layer defined when the model was built ('LastLayer' in my case)
labeling = infer(tf.constant(train_images[im_select], dtype=float))['LastLayer']

# Display the result
print("Image ", im_select, " is classified as a ", class_names[int(tf.argmax(labeling, axis=1))])
plt.imshow(train_images[im_select])
Answered 2020-04-07T21:55:16.757

It looks like the conversion succeeded. I tried loading both the Keras .pb file and the TensorRT-converted .pb file.

Below is some sample code:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
from tensorflow.keras.datasets import mnist

saved_model_loaded = tf.saved_model.load(
    'path to trt converted model')  # path to keras .pb or TensorRT .pb
#for layer in saved_model_loaded.keras_api.layers:

graph_func = saved_model_loaded.signatures['serving_default']
frozen_func = convert_variables_to_constants_v2(graph_func)  # freeze variables into constants

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# convert to tensors
input_tensors = tf.cast(x_test, dtype=tf.float32)

# run inference on the first test image
output = frozen_func(input_tensors[:1])[0].numpy()
print(output)

Note: I tried both the Keras and the TensorRT model, and the results are the same.

As for the model.summary() error: some methods such as .summary() do seem to be removed when the model is converted, but if you want to inspect the graph of the TensorRT-converted model, you can use TensorBoard instead.
Below is some sample code:

import tensorflow as tf
from tensorflow.python.summary import summary

# notebook magic: loads the TensorBoard extension (needed for %tensorboard below)
%load_ext tensorboard

def import_to_tensorboard(model_dir, log_dir):
  """View an imported protobuf model (`.pb` file) as a graph in Tensorboard.

  Args:
    model_dir: The location of the protobuf (`pb`) model to visualize
    log_dir: The location for the Tensorboard log to begin visualization from.

  Usage:
    Call this function with your model location and desired log directory.
    Launch Tensorboard by pointing it to the log directory.
    View your imported `.pb` model as a graph.
  """

  with tf.compat.v1.Session(graph=tf.Graph()) as sess:
    tf.compat.v1.saved_model.loader.load(
        sess, [tf.compat.v1.saved_model.tag_constants.SERVING], model_dir)

    pb_visual_writer = summary.FileWriter(log_dir)
    pb_visual_writer.add_graph(sess.graph)
    print("Model Imported. Visualize by running: "
          "tensorboard --logdir={}".format(log_dir))

Call the function:

import_to_tensorboard('path to trt model', '/logs/')

Open TensorBoard:

%tensorboard --logdir='path to logs'

Let me know if this helps.

Answered 2020-04-06T10:22:31.473