
I'm trying to load a GPT-2 model into a Node.js project. I believe this can be done with the tfjs library, so I tried converting the GPT-2 model to a tfjs model. Following the recommendation in this answer, I exported the GPT-2 model as a SavedModel.

!python3 -m pip install -q git+https://github.com/huggingface/transformers.git
!python3 -m pip install tensorflow tensorflowjs

Next, I ran the following code to export the SavedModel (.pb file).

from transformers import TFGPT2LMHeadModel, GPT2Tokenizer
import tensorflowjs
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# add the EOS token as PAD token to avoid warnings
model = TFGPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id)
# saving to a plain directory path writes the TF SavedModel format to ./test_gpt2
model.save("./test_gpt2")
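
For reference, here is a minimal sketch (not part of the steps above) of how the exported SavedModel could be loaded back and inspected, to see which serving signatures and input/output tensors the converter will work from. The "serving_default" name is the usual Keras default and is assumed here:

import tensorflow as tf

# Load the exported SavedModel and list its serving signatures
loaded = tf.saved_model.load("./test_gpt2")
print(list(loaded.signatures.keys()))        # usually ['serving_default']

# Assuming the default Keras signature name
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)      # expected input tensor specs
print(infer.structured_outputs)              # output tensor specs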

Next, I ran this command to convert the SavedModel into tfjs-compatible files.

!tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_node_names='gpt2' \
    --saved_model_tags=serve \
    /content/test_gpt2 \
    /content/test_gpt2_web_model
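
As a side note, the traceback below shows that the CLI ends up in tensorflowjs/converters/tf_saved_model_conversion_v2.py, so the same conversion can presumably also be driven directly from Python, which is convenient for experimenting in a notebook. This is only a sketch; it assumes convert_tf_saved_model takes the SavedModel directory and the output directory as its first two arguments, as the CLI appears to pass them:

from tensorflowjs.converters.tf_saved_model_conversion_v2 import convert_tf_saved_model

# Convert the SavedModel directory into tfjs graph-model files
# (model.json plus binary weight shards)
convert_tf_saved_model(
    "/content/test_gpt2",           # SavedModel directory
    "/content/test_gpt2_web_model"  # output directory
)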

This produces the following error:

2020-07-08 16:36:11.455383: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-07-08 16:36:11.459979: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 2300000000 Hz
2020-07-08 16:36:11.460216: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2e5b100 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-07-08 16:36:11.460284: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-07-08 16:36:18.337463: I tensorflow/core/grappler/devices.cc:60] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA support)
2020-07-08 16:36:18.337631: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-07-08 16:36:18.536301: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: graph_to_optimize
2020-07-08 16:36:18.536373: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: Graph size after: 163 nodes (0), 175 edges (0), time = 43.871ms.
2020-07-08 16:36:18.536384: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: Graph size after: 163 nodes (0), 175 edges (0), time = 50.779ms.
2020-07-08 16:36:18.536393: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:797] Optimization results for grappler item: __inference__wrapped_model_24863
2020-07-08 16:36:18.536402: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0.004ms.
2020-07-08 16:36:18.536411: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:799]   function_optimizer: function_optimizer did nothing. time = 0ms.
Traceback (most recent call last):
  File "/usr/local/bin/tensorflowjs_converter", line 8, in <module>
    sys.exit(pip_main())
  File "/usr/local/lib/python3.6/dist-packages/tensorflowjs/converters/converter.py", line 735, in pip_main
    main([' '.join(sys.argv[1:])])
  File "/usr/local/lib/python3.6/dist-packages/tensorflowjs/converters/converter.py", line 739, in main
    convert(argv[0].split(' '))
  File "/usr/local/lib/python3.6/dist-packages/tensorflowjs/converters/converter.py", line 681, in convert
    control_flow_v2=args.control_flow_v2)
  File "/usr/local/lib/python3.6/dist-packages/tensorflowjs/converters/tf_saved_model_conversion_v2.py", line 494, in convert_tf_saved_model
    weight_shard_size_bytes=weight_shard_size_bytes)
  File "/usr/local/lib/python3.6/dist-packages/tensorflowjs/converters/tf_saved_model_conversion_v2.py", line 143, in optimize_graph
    ', '.join(unsupported))
ValueError: Unsupported Ops in the model before optimization
StatefulPartitionedCall

It says that StatefulPartitionedCall is not supported. Is there any way to resolve this?
