TypeError: Expected int32, got None of type 'NoneType'

Question · Votes: 0 · Answers: 2

I have implemented a sequence-to-sequence model with an attention layer. If I use 300,000 data points I don't get any error, but if I use all the data points, model.fit raises the following error:

TypeError: Expected int32, got None of type 'NoneType' instead.


What is causing this?

The code before model.fit is:

class encoder_decoder(tf.keras.Model):
  def __init__(self, embedding_size, encoder_inputs_length, output_length, vocab_size, output_vocab_size, score_fun, units):
    super(encoder_decoder, self).__init__()
    self.vocab_size = vocab_size
    self.enc_units = units
    self.embedding_size = embedding_size
    self.encoder_inputs_length = encoder_inputs_length
    self.output_length = output_length
    self.lstm_output = 0
    self.state_h = 0
    self.state_c = 0
    self.output_vocab_size = output_vocab_size
    self.dec_units = units
    self.score_fun = score_fun
    self.att_units = units
    self.encoder = Encoder(self.vocab_size, self.embedding_size, self.enc_units, self.encoder_inputs_length)
    self.decoder = Decoder(self.output_vocab_size, self.embedding_size, self.output_length, self.dec_units, self.score_fun, self.att_units)
    # self.dense = Dense(self.output_vocab_size, activation="softmax")

  def call(self, data):
    input, output = data[0], data[1]
    encoder_hidden = self.encoder.initialize_states(input.shape[0])
    encoder_output, encoder_hidden, encoder_cell = self.encoder(input, encoder_hidden)
    decoder_hidden = encoder_hidden
    decoder_cell = encoder_cell
    decoder_output = self.decoder(output, encoder_output, decoder_hidden, decoder_cell)
    return decoder_output

In the call function, I initialize the encoder states with the number of rows in the input, using the following line of code:

 encoder_hidden = self.encoder.initialize_states(input.shape[0])

If I print input, I get shape (None, 55), and that is why I get this error. My total number of data points here is 330,614. When I use all of the data I get the error; when I use only 330,000 data points I do not get it. If I print a batch inside the call method, I get shape (64, 55).
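The (None, 55) shape is the *static* shape TensorFlow records while tracing call: under Keras's symbolic tracing the batch dimension is unknown, so input.shape[0] is the Python value None, while tf.shape(input)[0] is a tensor resolved at run time. A minimal standalone sketch (not the model above) illustrating the difference:

```python
import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec(shape=(None, 55), dtype=tf.int32)])
def batch_size(x):
    # Here x.shape is (None, 55): the static batch dimension is unknown
    # at trace time, so x.shape[0] would be None.
    # tf.shape(x)[0] is a scalar tensor resolved at run time instead.
    return tf.shape(x)[0]

print(int(batch_size(tf.zeros((64, 55), tf.int32))))  # 64
```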

Please find below the code I use to create the dataset for my sequence-to-sequence model.

A function to preprocess the data, a function to create the dataset, and a function to load the dataset:

def preprocess_sentence(w):
  # w = unicode_to_ascii(w.lower().strip())
  w = re.sub(r"([?.!,¿])", r" \1 ", w)
  w = re.sub(r'[" "]+', " ", w)
  w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
  w = w.strip()
  w = '<start> ' + w + ' <end>'
  return w  
def create_dataset(path, num_examples):
  lines = io.open(path, encoding='UTF-8').read().strip().split('\n')
  # lines1 = lines[330000:]
  # lines = lines[0:323386]+lines1

  word_pairs = [[preprocess_sentence(w) for w in l.split('\t')]  for l in lines[:num_examples]]
  word_pairs = [[i[0],i[1]] for i in word_pairs]
  return zip(*word_pairs)

def tokenize(lang):
  lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(
      filters='')
  lang_tokenizer.fit_on_texts(lang)

  tensor = lang_tokenizer.texts_to_sequences(lang)

  tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor,padding='post')
  return tensor, lang_tokenizer

def load_dataset(path, num_examples=None):
  # creating cleaned input, output pairs
  targ_lang, inp_lang = create_dataset(path, num_examples)

  input_tensor, inp_lang_tokenizer = tokenize(inp_lang)
  target_tensor, targ_lang_tokenizer = tokenize(targ_lang)

  return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer, targ_lang, inp_lang

# Try experimenting with the size of that dataset
num_examples = None
input_tensor, target_tensor, inp_lang, targ_lang, targ_lang_text, inp_lang_text = load_dataset(path, num_examples)

# Calculate max_length of the target tensors
max_length_targ, max_length_inp = target_tensor.shape[1], input_tensor.shape[1]
max_length_targ,max_length_inp

input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)

The shapes of the datasets are as follows:

shape of input train  (269291, 55)
shape of target train  (269291, 53)
shape of input test (67323, 55)
shape of target test (67323, 53)
python-3.x tensorflow keras
2 Answers
0 votes

Can you share the code block that comes before model.fit?

A NoneType error indicates that the final array passed to the model is empty for some reason. Add print statements in the preceding steps to find out where the array becomes empty.

Compare that run with the one that uses all of the data points, so you can see where the array changes and how to handle it before passing it to model.fit.
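One caveat when adding print statements inside a tf.function-compiled call: plain print() fires only at trace time and shows the static (possibly None) shape, while tf.print fires on every run with the concrete shape. A hypothetical sketch, not the asker's model:

```python
import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec(shape=(None, 55), dtype=tf.int32)])
def inspect(batch):
    # print() runs once, at trace time, and shows the static shape (None, 55);
    # tf.print runs on every call and shows the concrete shape, e.g. [64 55].
    print("static shape:", batch.shape)
    tf.print("runtime shape:", tf.shape(batch))
    return batch

inspect(tf.zeros((64, 55), tf.int32))
```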


0 votes

Change this line:

encoder_hidden = self.encoder.initialize_states(input.shape[0])

to:

encoder_hidden = self.encoder.initialize_states(tf.shape(input)[0])
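Concretely, this swaps the static batch dimension (None under Keras's symbolic tracing) for the dynamic one, which is always a concrete scalar tensor at run time. A minimal sketch with a hypothetical Encoder (a stand-in, not the asker's class) showing the pattern:

```python
import tensorflow as tf

class Encoder(tf.keras.layers.Layer):
    """Hypothetical stand-in for the asker's Encoder, showing the fix."""
    def __init__(self, units):
        super().__init__()
        self.units = units
        self.lstm = tf.keras.layers.LSTM(units, return_state=True)

    def initialize_states(self, batch_size):
        # Works with a Python int or a scalar tensor from tf.shape.
        return tf.zeros([batch_size, self.units])

    def call(self, x):
        # tf.shape(x)[0] is never None, even when Keras traces call()
        # with a symbolic batch dimension.
        h0 = self.initialize_states(tf.shape(x)[0])
        out, h, c = self.lstm(x, initial_state=[h0, h0])
        return out, h, c

enc = Encoder(8)
out, h, c = enc(tf.random.normal((4, 5, 3)))  # (batch, time, features)
print(out.shape)  # (4, 8)
```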
