I'm using a BERT tokenizer to process French text and I'm getting the error below, which I haven't been able to solve. Any suggestions would be welcome.
Traceback (most recent call last):
  File "training_cross_data_2.py", line 240, in <module>
    training_data(f, root, testdir, dict_unc)
  File "training_cross_data_2.py", line 107, in training_data
    Xtrain_emb, mdlname = get_flaubert_layer(data)
  File "training_cross_data_2.py", line 40, in get_flaubert_layer
    tokenized = texte.apply((lambda x: flaubert_tokenizer.encode(x, add_special_tokens=True, max_length=512, truncation=True)))
  File "/home/getalp/kelodjoe/anaconda3/envs/env/lib/python3.6/site-packages/pandas/core/series.py", line 3848, in apply
    mapped = lib.map_infer(values, f, convert=convert_dtype)
  File "pandas/_libs/lib.pyx", line 2329, in pandas._libs.lib.map_infer
  File "training_cross_data_2.py", line 40, in <lambda>
    tokenized = texte.apply((lambda x: flaubert_tokenizer.encode(x, add_special_tokens=True, max_length=512, truncation=True)))
  File "/home/anaconda3/envs/env/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 907, in encode
    **kwargs,
  File "/home/anaconda3/envs/env/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 1021, in encode_plus
    first_ids = get_input_ids(text)
  File "/home/anaconda3/envs/env/lib/python3.6/site-packages/transformers/tokenization_utils.py", line 1003, in get_input_ids
    "Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers."
ValueError: Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.
I've looked around for answers to this, but nothing that was suggested seems to work. texte is a dataframe.
Here is the code:
import os
import numpy as np
from transformers import FlaubertModel, FlaubertTokenizer

def get_flaubert_layer(texte):  # texte is a dataframe read from an Excel file
    language_model_dir = os.path.expanduser(args.language_model_dir)
    lge_size = language_model_dir[16:-1]  # modify when on jean zay: 27:-1
    print(lge_size)
    flaubert = FlaubertModel.from_pretrained(language_model_dir)
    flaubert_tokenizer = FlaubertTokenizer.from_pretrained(language_model_dir)
    tokenized = texte.apply(lambda x: flaubert_tokenizer.encode(x, add_special_tokens=True, max_length=512, truncation=True))
    max_len = 0
    for i in tokenized.values:
        if len(i) > max_len:
            max_len = len(i)
    padded = np.array([i + [0] * (max_len - len(i)) for i in tokenized.values])
    attention_mask = np.where(padded != 0, 1, 0)
I have another file with the same structure and it works fine, but for this one I don't understand why the error occurs. Should I re-download the model?
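A quick check that often explains this error (a hypothetical diagnostic, assuming texte is the text column as a pandas Series): empty cells read from Excel become NaN, which is a float rather than a string, and encode raises exactly this ValueError for any non-string input. The same happens if texte is the whole dataframe, since apply then passes entire columns to encode.

# Hypothetical sanity check: count the Python types present in the column.
# Any non-str entry (e.g. float for NaN cells read from Excel) breaks encode().
print(texte.apply(type).value_counts())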
The file looks like this:
You may want to change this line:
tokenized = texte.apply((lambda x: flaubert_tokenizer.encode(x, add_special_tokens=True, max_length=512, truncation=True)))
to
tokenized = flaubert_tokenizer.encode(texte["verbatim"],
                                      add_special_tokens=True,
                                      max_length=512,
                                      truncation=True)
This has the advantage of passing the entire column to the encode function in a single call, which may speed up the tokenization. The error's stack trace is almost self-explanatory:
ValueError: Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.
The tokenizer needs a string, or a list of strings, to operate on.
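For illustration (hypothetical calls, reusing the flaubert_tokenizer loaded above):

flaubert_tokenizer.encode("Bonjour tout le monde")  # OK: a single string
flaubert_tokenizer.encode(float("nan"))             # not a string: raises the ValueError above

A NaN coming from an empty Excel cell is a float, which is why a column that has not been cleaned first can trigger the error.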
Assuming the field texte is the pandas dataframe corresponding to your dataset, and that it has a column named verbatim holding the text you want to tokenize, all you have to do is pass that column to the tokenizer as a list, making sure the list contains no null or empty texts. You would do it as follows:
texte.dropna(subset=['verbatim'], inplace=True)  # drop rows with null text
tokenized = flaubert_tokenizer.encode(texte['verbatim'].tolist(),
                                      add_special_tokens=True,
                                      max_length=512,
                                      truncation=True)