My training loss does not decrease even after increasing the model size


I can't get the training loss to decrease even after increasing the size of my TensorFlow model. Could someone suggest which techniques I can apply to reduce the training loss?

I did the following preprocessing steps (a sketch of both follows the list):

  1. Resampling
  2. Scaling
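
Neither step is shown in the code, so here is a minimal sketch of what they might look like, using the RandomUnderSampler and StandardScaler that the script already imports (the variable names and arguments are my assumptions):

from imblearn.under_sampling import RandomUnderSampler
from sklearn.preprocessing import StandardScaler

# 1. Resampling: undersample the majority class so both classes are
#    balanced. fit_resample expects 2-D features (samples, features).
rus = RandomUnderSampler(random_state=42)
X_train, y_train = rus.fit_resample(X_train, y_train)

# 2. Scaling: fit the scaler on the training split only, then apply the
#    same transform to the test split to avoid leakage.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# For the Conv1D model below, the arrays would then be reshaped to 3-D,
# e.g. (samples, timesteps, 1).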

Below is my code:

import os
os.chdir(os.path.dirname(os.path.abspath(__file__)))

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pickle import load, dump
import tensorflow as tf
from imblearn.under_sampling import RandomUnderSampler
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import precision_recall_curve
from tensorflow.keras.layers import (Input, Conv1D, BatchNormalization,
                                     Activation, GlobalAveragePooling1D, Dense)
from tensorflow.keras.models import Model, load_model
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping

# X_train, y_train, X_test, y_test, size, idx and ep are defined earlier
# (not shown here).

# Fully convolutional binary classifier: three Conv1D blocks, global
# average pooling, and a sigmoid output.
ip = Input(shape=X_train.shape[1:])
y = Conv1D(size, 8, padding='same', kernel_initializer='he_uniform')(ip)
y = BatchNormalization()(y)
y = Activation('relu')(y)
y = Conv1D(size * 2, 5, padding='same', kernel_initializer='he_uniform')(y)
y = BatchNormalization()(y)
y = Activation('relu')(y)
y = Conv1D(size, 3, padding='same', kernel_initializer='he_uniform')(y)
y = BatchNormalization()(y)
y = Activation('relu')(y)
y = GlobalAveragePooling1D()(y)
out = Dense(1, activation='sigmoid')(y)

model = Model(ip, out)
model.compile(optimizer='adam', loss=tf.keras.losses.BinaryCrossentropy())

# Stop early once the training loss plateaus; keep only the checkpoint
# with the lowest validation loss.
es = EarlyStopping(monitor='loss', patience=3)
mc = ModelCheckpoint("lstm_cnn_resample_" + str(size) + "_" + str(idx) + ".h5",
                     monitor='val_loss', mode='min', save_best_only=True)

model.fit(X_train, y_train, epochs=ep, batch_size=256,
          validation_data=(X_test, y_test), callbacks=[mc, es])

Below is the output I get during training and testing.

Epoch 1/20

15355/15355 [==============================] - 253s 16ms/step - loss: 0.5464 - val_loss: 0.5497

Epoch 2/20
15355/15355 [==============================] - 247s 16ms/step - loss: 0.5403 - val_loss: 0.5493

Epoch 3/20
15355/15355 [==============================] - 244s 16ms/step - loss: 0.5389 - val_loss: 0.5497

Epoch 4/20
15355/15355 [==============================] - 253s 16ms/step - loss: 0.5380 - val_loss: 0.5372

Epoch 5/20
15355/15355 [==============================] - 260s 17ms/step - loss: 0.5374 - val_loss: 0.5642

Epoch 6/20
15355/15355 [==============================] - 290s 19ms/step - loss: 0.5371 - val_loss: 0.6522

Epoch 7/20
15355/15355 [==============================] - 278s 18ms/step - loss: 0.5367 - val_loss: 0.5453

Epoch 8/20
15355/15355 [==============================] - 256s 17ms/step - loss: 0.5363 - val_loss: 0.5584

Epoch 9/20
15355/15355 [==============================] - 273s 18ms/step - loss: 0.5361 - val_loss: 0.5441

Epoch 10/20
15355/15355 [==============================] - 248s 16ms/step - loss: 0.5359 - val_loss: 0.5423

Epoch 11/20
15355/15355 [==============================] - 255s 17ms/step - loss: 0.5357 - val_loss: 0.5490

Epoch 12/20
15355/15355 [==============================] - 261s 17ms/step - loss: 0.5355 - val_loss: 0.5484

Epoch 13/20
15355/15355 [==============================] - 268s 17ms/step - loss: 0.5354 - val_loss: 0.5584

Epoch 14/20
15355/15355 [==============================] - 261s 17ms/step - loss: 0.5353 - val_loss: 0.5282

Epoch 15/20
15355/15355 [==============================] - 265s 17ms/step - loss: 0.5351 - val_loss: 0.5589

Epoch 16/20
15355/15355 [==============================] - 228s 15ms/step - loss: 0.5350 - val_loss: 0.5482

Epoch 17/20
15355/15355 [==============================] - 256s 17ms/step - loss: 0.5349 - val_loss: 0.5442
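
The script imports load_model, precision_recall_curve and matplotlib but never uses them, so presumably the saved checkpoint is evaluated afterwards. A minimal sketch of that step, assuming the checkpoint file name and variables from above:

from tensorflow.keras.models import load_model
from sklearn.metrics import precision_recall_curve
import matplotlib.pyplot as plt

# Load the best checkpoint written by ModelCheckpoint above
# (size and idx as in the training script).
best = load_model("lstm_cnn_resample_" + str(size) + "_" + str(idx) + ".h5")

# Predicted probabilities for the positive class.
y_prob = best.predict(X_test).ravel()

# Precision-recall curve, useful for picking a decision threshold
# on imbalanced data.
precision, recall, thresholds = precision_recall_curve(y_test, y_prob)
plt.plot(recall, precision)
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.show()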
Tags: python, tensorflow, deep-learning