I'm trying to build a many-to-one RNN with LSTM cells to classify tweets for Twitter sentiment analysis. After trying to fit my model, I get a ValueError that says:

> If your data is in the form of symbolic tensors, you should specify the `steps_per_epoch` argument (instead of the `batch_size` argument, because symbolic tensors are expected to produce batches of input data).

My guess is that this comes from the way I tokenize the input, but I'm not sure what a symbolic tensor is. What does this mean, and what can I do to fix it?
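For reference, this seems to be the distinction Keras is drawing; a minimal illustration (the array values are made up, and the exact printed Tensor class name depends on your TensorFlow version):

import numpy as np
import tensorflow as tf

y_numpy = np.array([[1, 0, 0], [0, 1, 0]])  # concrete data: fine with fit(batch_size=...)
y_tensor = tf.convert_to_tensor(y_numpy)    # a tf.Tensor, i.e. what Keras calls "symbolic"
                                            # when running in graph (TF 1.x) mode
print(type(y_numpy))   # <class 'numpy.ndarray'>
print(type(y_tensor))  # a TensorFlow Tensor class, not a NumPy array

Here is my code: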
#imports needed for this snippet
import numpy as np
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

# Tokenize the input
#creates the tokenizer
tokenizer = Tokenizer()
#fits the tokenizer to the text, i.e. the most common words get the lowest indices and more obscure words get indices farther away
tokenizer.fit_on_texts(X_training)
#converts the input to token indices
X_training_tokens = tokenizer.texts_to_sequences(X_training)
#get the length of the longest input, in words
maxLen = max([len(s.split()) for s in X_training])
#padding so all inputs are the same size
X_train_pad = pad_sequences(X_training_tokens, maxlen = maxLen)
#time to make the embedding matrix
#(embeddings is a word -> vector dict of pretrained embeddings and dims is their
# dimensionality; both are defined earlier)
#instantiate embedding matrix of zeroes
embedding_matrix = np.zeros((len(tokenizer.word_index) + 1, dims))
#go through each word in the token list
for word, i in tokenizer.word_index.items():
    #get the corresponding embedding vector (if it exists)
    embedding_vector = embeddings.get(word)
    #check if it's not None
    if embedding_vector is not None:
        #add that to the embedding matrix
        embedding_matrix[i] = embedding_vector
#Make the model
Model = Sequential()
Model.add(
    Embedding(
        input_dim = len(tokenizer.word_index) + 1,
        output_dim = dims,
        weights = [embedding_matrix],
        input_length = maxLen,
        trainable = False
    )
)
Model.add(
    LSTM(
        units = maxLen,
        return_sequences = False
        #possibly add dropout
    )
)
Model.add(
    Dense(
        maxLen,
        activation = 'relu'
    )
)
Model.add(
    Dense(
        3,
        activation = 'softmax'
    )
)
Model.compile(
    optimizer = 'Adam',
    loss = 'categorical_crossentropy',
    metrics = ['accuracy']
)
costs = Model.fit(
    x = X_train_pad,
    y = Y_training,
    batch_size = 2048,
    epochs = 10
)
It turned out that because I was using TensorFlow's one_hot function, my Y was a symbolic tensor. I switched to the Keras to_categorical function instead and was able to get a valid NumPy array, which fixed the error.
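For anyone hitting the same error, a minimal sketch of that swap (the integer class labels here are made up for illustration):

import numpy as np
from keras.utils import to_categorical

labels = np.array([0, 2, 1, 1])  # illustrative integer class labels

#tf.one_hot would return a symbolic tf.Tensor here; to_categorical returns a
#plain NumPy array, which is what Model.fit() expects alongside batch_size
Y_training = to_categorical(labels, num_classes = 3)
print(type(Y_training))  # <class 'numpy.ndarray'>

The error's other suggestion, passing steps_per_epoch instead of batch_size, applies when you intentionally feed symbolic tensors that generate their own batches; converting the labels to a NumPy array, as above, is the simpler fix here.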