
I am trying to use the GLUE metric from the HuggingFace nlp library to check whether a given sentence is a grammatical English sentence. However, I am running into an error and am stuck, unable to move forward.

What I have tried so far:

The reference and the prediction are two plain text sentences.

!pip install transformers
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-large-uncased')
reference="Security has been beefed across the country as a 2 day nation wide curfew came into effect."
prediction="Security has been tightened across the country as a 2-day nationwide curfew came into effect."
import nlp
glue_metric = nlp.load_metric('glue',name="cola")

#Using BertTokenizer
encoded_reference=tokenizer.encode(reference, add_special_tokens=False)
encoded_prediction=tokenizer.encode(prediction, add_special_tokens=False)

glue_score = glue_metric.compute(encoded_prediction, encoded_reference)

I get the following error:


ValueError                                Traceback (most recent call last)
<ipython-input-9-4c3a3ce7b583> in <module>()
----> 1 glue_score = glue_metric.compute(encoded_prediction, encoded_reference)

6 frames
/usr/local/lib/python3.6/dist-packages/nlp/metric.py in compute(self, predictions, references, timeout, **metrics_kwargs)
    198         predictions = self.data["predictions"]
    199         references = self.data["references"]
--> 200         output = self._compute(predictions=predictions, references=references, **metrics_kwargs)
    201         return output
    202 

/usr/local/lib/python3.6/dist-packages/nlp/metrics/glue/27b1bc63e520833054bd0d7a8d0bc7f6aab84cc9eed1b576e98c806f9466d302/glue.py in _compute(self, predictions, references)
    101             return pearson_and_spearman(predictions, references)
    102         elif self.config_name in ["mrpc", "qqp"]:
--> 103             return acc_and_f1(predictions, references)
    104         elif self.config_name in ["sst2", "mnli", "mnli_mismatched", "mnli_matched", "qnli", "rte", "wnli", "hans"]:
    105             return {"accuracy": simple_accuracy(predictions, references)}

/usr/local/lib/python3.6/dist-packages/nlp/metrics/glue/27b1bc63e520833054bd0d7a8d0bc7f6aab84cc9eed1b576e98c806f9466d302/glue.py in acc_and_f1(preds, labels)
     60 def acc_and_f1(preds, labels):
     61     acc = simple_accuracy(preds, labels)
---> 62     f1 = f1_score(y_true=labels, y_pred=preds)
     63     return {
     64         "accuracy": acc,

/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in f1_score(y_true, y_pred, labels, pos_label, average, sample_weight, zero_division)
   1097                        pos_label=pos_label, average=average,
   1098                        sample_weight=sample_weight,
-> 1099                        zero_division=zero_division)
   1100 
   1101 

/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in fbeta_score(y_true, y_pred, beta, labels, pos_label, average, sample_weight, zero_division)
   1224                                                  warn_for=('f-score',),
   1225                                                  sample_weight=sample_weight,
-> 1226                                                  zero_division=zero_division)
   1227     return f
   1228 

/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in precision_recall_fscore_support(y_true, y_pred, beta, labels, pos_label, average, warn_for, sample_weight, zero_division)
   1482         raise ValueError("beta should be >=0 in the F-beta score")
   1483     labels = _check_set_wise_labels(y_true, y_pred, average, labels,
-> 1484                                     pos_label)
   1485 
   1486     # Calculate tp_sum, pred_sum, true_sum ###

/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py in _check_set_wise_labels(y_true, y_pred, average, labels, pos_label)
   1314             raise ValueError("Target is %s but average='binary'. Please "
   1315                              "choose another average setting, one of %r."
-> 1316                              % (y_type, average_options))
   1317     elif pos_label not in (None, 1):
   1318         warnings.warn("Note that pos_label (set to %r) is ignored when "

ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].
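
If I read the traceback correctly, the failure comes from sklearn's f1_score, which defaults to average='binary' and therefore expects only two distinct label values, whereas my inputs are token-id sequences with many distinct values. A minimal sketch that reproduces just that sklearn error (with made-up token ids, not my actual encodings):

from sklearn.metrics import f1_score

# average defaults to 'binary', which assumes exactly two classes;
# token ids introduce many distinct values, so sklearn raises
# "Target is multiclass but average='binary'".
f1_score(y_true=[101, 2054, 2003], y_pred=[101, 2009, 2003])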

However, with the same approach as above I can get results for 'stsb' (pearson and spearmanr), as shown below. Any help or a workaround for 'cola' would be really appreciated. Thank you.
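
For completeness, this is the 'stsb' variant that does return a result for me; only the config name changes, the rest is identical to the snippet above:

stsb_metric = nlp.load_metric('glue', name="stsb")

# Same encoded token-id lists as above; this returns
# {'pearson': ..., 'spearmanr': ...} instead of raising.
stsb_score = stsb_metric.compute(encoded_prediction, encoded_reference)
print(stsb_score)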
