Language model for machine translation between a low-resource language and Portuguese using TensorFlow

Problem description (votes: 0, answers: 1)

I am trying to train a language model with TensorFlow for machine translation between a low-resource language and Portuguese. Unfortunately, I get the following error:

PS C:\Users\myuser\PycharmProjects\teste> python .\tensorflow_model.py                   
2024-08-23 21:29:50.839647: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE SSE2 SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX512F AVX512_VNNI AVX512_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
  File ".\tensorflow_model.py", line 52, in <module>
    dataset = tf.data.Dataset.from_tensor_slices((src_tensor, tgt_tensor)).shuffle(BUFFER_SIZE)
  File "C:\Users\myuser\PycharmProjects\teste\.venv\lib\site-packages\tensorflow\python\data\ops\dataset_ops.py", line 831, in from_tensor_slices
    return from_tensor_slices_op._from_tensor_slices(tensors, name)
  File "C:\Users\myuser\PycharmProjects\teste\.venv\lib\site-packages\tensorflow\python\data\ops\from_tensor_slices_op.py", line 25, in _from_tensor_slices
    return _TensorSliceDataset(tensors, name=name)
  File "C:\Users\myuser\PycharmProjects\teste\.venv\lib\site-packages\tensorflow\python\data\ops\from_tensor_slices_op.py", line 45, in __init__
    batch_dim.assert_is_compatible_with(
  File "C:\Users\myuser\PycharmProjects\teste\.venv\lib\site-packages\tensorflow\python\framework\tensor_shape.py", line 300, in assert_is_compatible_with
    raise ValueError("Dimensions %s and %s are not compatible" %
ValueError: Dimensions 21 and 22 are not compatible

How can I fix this error? Here is my code:

import tensorflow as tf
import numpy as np
import re
import os

# Clean data
def preprocess_sentence(sentence):
    sentence = sentence.lower().strip()
    sentence = re.sub(r"([?.!,¿])", r" \1 ", sentence)
    sentence = re.sub(r'[" "]+', " ", sentence)
    sentence = re.sub(r"[^a-zA-Z?.!,¿]+", " ", sentence)
    sentence = sentence.strip()
    sentence = '<start> ' + sentence + ' <end>'
    return sentence

# Function to load the parallel corpus (one sentence per line in each file)
def load_data(file_path_src, file_path_tgt):
    with open(file_path_src, 'r', encoding='utf-8') as f:
        src_sentences = f.read().strip().split('\n')
    with open(file_path_tgt, 'r', encoding='utf-8') as f:
        tgt_sentences = f.read().strip().split('\n')

    src_sentences = [preprocess_sentence(sentence) for sentence in src_sentences]
    tgt_sentences = [preprocess_sentence(sentence) for sentence in tgt_sentences]

    return src_sentences, tgt_sentences

# Load data
src_sentences, tgt_sentences = load_data('src_language.txt', 'portuguese.txt')

# Tokenization
src_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')
tgt_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='')

src_tokenizer.fit_on_texts(src_sentences)
tgt_tokenizer.fit_on_texts(tgt_sentences)

src_tensor = src_tokenizer.texts_to_sequences(src_sentences)
tgt_tensor = tgt_tokenizer.texts_to_sequences(tgt_sentences)

src_tensor = tf.keras.preprocessing.sequence.pad_sequences(src_tensor, padding='post')
tgt_tensor = tf.keras.preprocessing.sequence.pad_sequences(tgt_tensor, padding='post')

BUFFER_SIZE = len(src_tensor)

# Create the dataset
dataset = tf.data.Dataset.from_tensor_slices((src_tensor, tgt_tensor)).shuffle(BUFFER_SIZE) 
Tags: python, tensorflow, machine-learning, nlp
1 Answer

0 votes

The error indicates that you are trying to build a dataset from two tensors (src_tensor and tgt_tensor) whose first dimensions differ (here, one has 21 rows and the other 22), which makes them incompatible with tf.data.Dataset.from_tensor_slices: each slice must pair one source row with one target row, so both tensors need the same number of rows. In practice this usually means src_language.txt and portuguese.txt do not contain the same number of lines. To resolve it, either fix the data files so they are line-aligned, or truncate the longer tensor (or otherwise align the two) so the row counts match. Please refer to this gist.
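A minimal sketch of the truncation approach, using toy zero arrays in place of the padded tensors from the question (22 source rows vs. 21 target rows reproduces the mismatch); note that truncation only makes sense if the remaining rows are genuinely parallel:

```python
import numpy as np
import tensorflow as tf

# Toy stand-ins for the padded tensors from the question.
src_tensor = np.zeros((22, 10), dtype=np.int32)  # 22 source sentences
tgt_tensor = np.zeros((21, 12), dtype=np.int32)  # 21 target sentences

# Keep only as many rows as both tensors share, so the first
# (batch) dimension of each tensor is identical.
n_pairs = min(len(src_tensor), len(tgt_tensor))
src_tensor = src_tensor[:n_pairs]
tgt_tensor = tgt_tensor[:n_pairs]

BUFFER_SIZE = n_pairs

# Now from_tensor_slices accepts the pair without a dimension error.
dataset = tf.data.Dataset.from_tensor_slices(
    (src_tensor, tgt_tensor)
).shuffle(BUFFER_SIZE)
```

The per-sentence padded lengths (10 vs. 12 here) may differ between source and target; only the number of sentence pairs must agree.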
