
I am following this quickstart guide. The problem is that the code it provides is written for a GPU machine, while I am running it on a CPU-based Ubuntu machine. I commented out the lines that put everything on CUDA. The code now throws errors and I don't know how to resolve them. My question is: how do I make this work?
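As far as I understand, the intent of those lines can also be written in a device-agnostic way (my own sketch, not from the guide), which should be a no-op on a CPU-only machine:

# Sketch: pick CUDA if available, otherwise fall back to CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tokens_tensor = tokens_tensor.to(device)
segments_tensors = segments_tensors.to(device)
model.to(device)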

I have looked at this answer, but it is not what I am looking for.

The complete code is here.
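For context, the setup that runs before both snippets below is roughly the following (a sketch based on the quickstart as far as I can tell; the example sentence, masked_index and segment ids are the guide's):

import torch
from pytorch_transformers import BertTokenizer, BertModel, BertForMaskedLM

# Load pre-trained tokenizer (vocabulary)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Tokenize the guide's example sentence and mask 'henson' in the second sentence
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = tokenizer.tokenize(text)
masked_index = 8
tokenized_text[masked_index] = '[MASK]'

# Convert tokens to vocabulary indices and build segment ids for the two sentences
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]

# Convert inputs to PyTorch tensors
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])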

1. Use BertModel to encode the input into hidden states:
#Load pre-trained model (weights)
model = BertModel.from_pretrained('bert-base-uncased')

#Set the model in evaluation mode to deactivate the DropOut modules
# This is IMPORTANT to have reproducible results during evaluation!
model.eval()

#***I have commented out these 3 lines***

# If you have a GPU, put everything on cuda
#tokens_tensor = tokens_tensor.to('cuda')
#segments_tensors = segments_tensors.to('cuda')
#model.to('cuda')

#Everything else is untouched
# *** -----------------***---------------***

# Predict hidden states features for each layer
with torch.no_grad():
    # See the models docstrings for the detail of the inputs
    outputs = model(tokens_tensor, token_type_ids=segments_tensors)
    # PyTorch-Transformers models always output tuples.
    # See the models docstrings for the detail of all the outputs
    # In our case, the first element is the hidden state of the last layer of the Bert model
    encoded_layers = outputs[0]
# We have encoded our input sequence in a FloatTensor of shape (batch size, sequence length, model hidden dimension)
assert tuple(encoded_layers.shape) == (1, len(indexed_tokens), model.config.hidden_size)

Error for 1:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-40-a86e9643e7f3> in <module>
     11 
     12 # We have encoded our input sequence in a FloatTensor of shape (batch size, sequence length, model hidden dimension)
---> 13 assert tuple(encoded_layers).shape == (1, len(indexed_tokens), model.config.hidden_size)

AttributeError: 'tuple' object has no attribute 'shape'
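One thing I notice: the traceback shows the assert as tuple(encoded_layers).shape, whereas the code above has tuple(encoded_layers.shape). The former converts the tensor into a tuple of rows first and then asks the tuple for .shape, which fails regardless of CPU or GPU. A minimal check (my own sketch, with 768 as the bert-base-uncased hidden size) reproduces it:

x = torch.zeros(1, 13, 768)   # stand-in for encoded_layers
print(tuple(x.shape))         # (1, 13, 768) -> works
print(tuple(x).shape)         # AttributeError: 'tuple' object has no attribute 'shape'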

2. Use BertForMaskedLM to predict the masked token:

# Load pre-trained model (weights)
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()

#***---------------Commented--------------------------
# If you have a GPU, put everything on cuda
#tokens_tensor = tokens_tensor.to('cuda')
#segments_tensors = segments_tensors.to('cuda')
#model.to('cuda')

#***---------------------------------------------

# Predict all tokens
with torch.no_grad():
    outputs = model(tokens_tensor, token_type_ids=segments_tensors)
    predictions = outputs[0]

# confirm we were able to predict 'henson'
predicted_index = torch.argmax(predictions[0, masked_index]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
assert predicted_token == 'henson'

Error for 2:

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-42-9b965490d278> in <module>
     17 predicted_index = torch.argmax(predictions[0, masked_index]).item()
     18 predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
---> 19 assert predicted_token == 'henson'

AssertionError:
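I don't know what the model actually predicts here instead of 'henson'. A small diagnostic I could add right after the prediction (my own sketch, using torch.topk, not part of the guide) would show the most likely tokens at the masked position:

# Inspect the 5 most likely tokens at masked_index instead of asserting directly
top_scores, top_indices = torch.topk(predictions[0, masked_index], k=5)
print(tokenizer.convert_ids_to_tokens(top_indices.tolist()))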
