Loss decreases every epoch, but accuracy stays constant for many epochs before changing

Question · Votes: 0 · Answers: 1

I am building a neural network model to identify blade sharpness from the cutting force measured at a given distance after a cut. My data is in CSV format, and I am using a binary classification model with 2 hidden layers. I have only 45 input data points. When I run the model, the loss keeps decreasing, but the accuracy stays constant for many epochs before it changes.

# Imports used by the snippet (not shown in the original post)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Initialising the neural network
Classifier = Sequential()

# Adding the input layer and the first hidden layer
Classifier.add(Dense(units=2, kernel_initializer='he_uniform', activation='relu', input_dim=2))

# Adding the second hidden layer
Classifier.add(Dense(units=2, kernel_initializer='he_uniform', activation='relu'))

# Adding the output layer
Classifier.add(Dense(units=1, kernel_initializer='glorot_uniform', activation='sigmoid'))

Classifier.summary()
Epoch 177/2000
1/1 [==============================] - 0s 98ms/step - loss: 0.5921 - accuracy: 0.7222 - val_loss: 0.6642 - val_accuracy: 0.5000
Epoch 178/2000
1/1 [==============================] - 0s 72ms/step - loss: 0.5915 - accuracy: 0.7222 - val_loss: 0.6627 - val_accuracy: 0.5000
Epoch 179/2000
1/1 [==============================] - 0s 83ms/step - loss: 0.5908 - accuracy: 0.7222 - val_loss: 0.6612 - val_accuracy: 0.5000
Epoch 180/2000
1/1 [==============================] - 0s 82ms/step - loss: 0.5902 - accuracy: 0.7222 - val_loss: 0.6597 - val_accuracy: 0.5000
Epoch 181/2000
1/1 [==============================] - 0s 123ms/step - loss: 0.5896 - accuracy: 0.7222 - val_loss: 0.6581 - val_accuracy: 0.5000
Epoch 182/2000
1/1 [==============================] - 0s 77ms/step - loss: 0.5889 - accuracy: 0.7222 - val_loss: 0.6566 - val_accuracy: 0.5000
Epoch 183/2000
1/1 [==============================] - 0s 75ms/step - loss: 0.5883 - accuracy: 0.7500 - val_loss: 0.6550 - val_accuracy: 0.5000
Epoch 184/2000
1/1 [==============================] - 0s 73ms/step - loss: 0.5877 - accuracy: 0.8056 - val_loss: 0.6533 - val_accuracy: 0.5000
Epoch 185/2000
1/1 [==============================] - 0s 83ms/step - loss: 0.5870 - accuracy: 0.8056 - val_loss: 0.6517 - val_accuracy: 0.5000
Epoch 186/2000
1/1 [==============================] - 0s 103ms/step - loss: 0.5864 - accuracy: 0.8056 - val_loss: 0.6500 - val_accuracy: 0.5000
Epoch 187/2000
1/1 [==============================] - 0s 95ms/step - loss: 0.5857 - accuracy: 0.8056 - val_loss: 0.6484 - val_accuracy: 0.5000
Epoch 188/2000
1/1 [==============================] - 0s 69ms/step - loss: 0.5851 - accuracy: 0.8056 - val_loss: 0.6467 - val_accuracy: 0.5000
Epoch 189/2000
1/1 [==============================] - 0s 84ms/step - loss: 0.5845 - accuracy: 0.8056 - val_loss: 0.6450 - val_accuracy: 0.5000
Epoch 190/2000
1/1 [==============================] - 0s 94ms/step - loss: 0.5838 - accuracy: 0.8056 - val_loss: 0.6433 - val_accuracy: 0.5000
Epoch 191/2000
1/1 [==============================] - 0s 86ms/step - loss: 0.5832 - accuracy: 0.8056 - val_loss: 0.6416 - val_accuracy: 0.5000
Epoch 192/2000
1/1 [==============================] - 0s 80ms/step - loss: 0.5825 - accuracy: 0.8056 - val_loss: 0.6399 - val_accuracy: 0.5000
Epoch 193/2000
1/1 [==============================] - 0s 63ms/step - loss: 0.5818 - accuracy: 0.8056 - val_loss: 0.6381 - val_accuracy: 0.5000
Epoch 194/2000
1/1 [==============================] - 0s 79ms/step - loss: 0.5812 - accuracy: 0.8056 - val_loss: 0.6364 - val_accuracy: 0.5000
Epoch 195/2000
1/1 [==============================] - 0s 87ms/step - loss: 0.5805 - accuracy: 0.8056 - val_loss: 0.6347 - val_accuracy: 0.5000
Epoch 196/2000
1/1 [==============================] - 0s 90ms/step - loss: 0.5799 - accuracy: 0.8056 - val_loss: 0.6330 - val_accuracy: 0.5000
Epoch 197/2000
1/1 [==============================] - 0s 83ms/step - loss: 0.5792 - accuracy: 0.8056 - val_loss: 0.6313 - val_accuracy: 0.7500
Epoch 198/2000
1/1 [==============================] - 0s 191ms/step - loss: 0.5785 - accuracy: 0.8333 - val_loss: 0.6296 - val_accuracy: 1.0000
Epoch 199/2000
1/1 [==============================] - 0s 77ms/step - loss: 0.5779 - accuracy: 0.8333 - val_loss: 0.6278 - val_accuracy: 1.0000
Epoch 200/2000
1/1 [==============================] - 0s 122ms/step - loss: 0.5772 - accuracy: 0.8333 - val_loss: 0.6261 - val_accuracy: 1.0000
Epoch 201/2000
1/1 [==============================] - 0s 98ms/step - loss: 0.5765 - accuracy: 0.8333 - val_loss: 0.6244 - val_accuracy: 1.0000
Epoch 202/2000
1/1 [==============================] - 0s 85ms/step - loss: 0.5758 - accuracy: 0.8333 - val_loss: 0.6226 - val_accuracy: 1.0000
Epoch 203/2000
1/1 [==============================] - 0s 107ms/step - loss: 0.5752 - accuracy: 0.8333 - val_loss: 0.6209 - val_accuracy: 1.0000
Epoch 204/2000
1/1 [==============================] - 0s 54ms/step - loss: 0.5745 - accuracy: 0.8333 - val_loss: 0.6192 - val_accuracy: 1.0000
Epoch 205/2000
1/1 [==============================] - 0s 67ms/step - loss: 0.5738 - accuracy: 0.8333 - val_loss: 0.6175 - val_accuracy: 1.0000
Epoch 206/2000
1/1 [==============================] - 0s 125ms/step - loss: 0.5731 - accuracy: 0.8333 - val_loss: 0.6158 - val_accuracy: 1.0000
Epoch 207/2000
1/1 [==============================] - 0s 101ms/step - loss: 0.5725 - accuracy: 0.8333 - val_loss: 0.6140 - val_accuracy: 1.0000
Epoch 208/2000
1/1 [==============================] - 0s 146ms/step - loss: 0.5718 - accuracy: 0.8333 - val_loss: 0.6123 - val_accuracy: 1.0000
Epoch 209/2000
1/1 [==============================] - 0s 218ms/step - loss: 0.5711 - accuracy: 0.8333 - val_loss: 0.6106 - val_accuracy: 1.0000
Epoch 210/2000
1/1 [==============================] - 0s 174ms/step - loss: 0.5704 - accuracy: 0.8333 - val_loss: 0.6088 - val_accuracy: 1.0000
tensorflow machine-learning deep-learning neural-network tf.keras
1 Answer

Votes: 0

The code is not sufficient for me to reproduce this on my side, but there are two common reasons why the loss decreases while accuracy stops improving after a point:

  1. Overfitting. Evaluate the model separately on the training data and the test data, and check whether it performs well on the training data but poorly on the test data.

You can quickly check your data with this code:

# Imports assumed by the snippet
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# label each row by its origin before combining
train['is_train'] = 1
test['is_train'] = 0

# combining test and train data
df_combine = pd.concat([train, test], axis=0, ignore_index=True)
# dropping 'target' column as it is not present in the test set
df_combine = df_combine.drop('target', axis=1)
y = df_combine['is_train'].values                 # labels: train vs. test origin
x = df_combine.drop('is_train', axis=1).values    # covariates (our independent variables)
tst, trn = test.values, train.values

m = RandomForestClassifier(n_jobs=-1, max_depth=5, min_samples_leaf=5)
predictions = np.zeros(y.shape)  # empty array to hold out-of-fold predictions

If the performance on the training data and the test data is comparable, this problem probably does not exist. Check the full code gist here.

A related article can also be consulted.
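To finish the check sketched above, the classifier can be fit with cross-validation and scored with ROC AUC: an AUC near 0.5 means the model cannot tell training rows from test rows (good), while an AUC near 1.0 means the two sets are easy to distinguish. A minimal self-contained sketch; the `train`/`test` frames and the column names below are synthetic stand-ins for illustration:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# synthetic stand-ins for the post's train/test frames (2 features, as in the question)
train = pd.DataFrame(rng.normal(size=(30, 2)), columns=['force', 'distance'])
test = pd.DataFrame(rng.normal(size=(15, 2)), columns=['force', 'distance'])

# label rows by origin and combine
train['is_train'], test['is_train'] = 1, 0
df = pd.concat([train, test], axis=0, ignore_index=True)
y = df['is_train'].values
x = df.drop('is_train', axis=1).values

m = RandomForestClassifier(n_jobs=-1, max_depth=5, min_samples_leaf=5, random_state=0)
# out-of-fold probability that each row came from the training set
probs = cross_val_predict(m, x, y, cv=5, method='predict_proba')[:, 1]
auc = roc_auc_score(y, probs)
print(f"adversarial-validation AUC: {auc:.2f}")  # near 0.5 when the distributions match
```

Since both frames here are drawn from the same distribution, the AUC should land near 0.5; on real data a high AUC flags a train/test mismatch.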

  2. The training data and the test data need to come from the same distribution.

You can verify this by plotting each of them separately and comparing the plots.

If they are not the same, you may need to apply a transformation such as a square, log, or exponential transform.
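As an example of the log transform, a right-skewed feature can be compressed with `np.log1p` before comparing the two splits; a small NumPy sketch with synthetic values (the "cutting force" data here is made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# a right-skewed feature (e.g. cutting force): log-normal values
feature = rng.lognormal(mean=0.0, sigma=1.0, size=1000)

# raw values are heavily skewed: the mean is pulled far above the median
raw_gap = np.mean(feature) - np.median(feature)

# log1p compresses the long right tail; log(1 + x) also handles zeros safely
transformed = np.log1p(feature)
log_gap = np.mean(transformed) - np.median(transformed)

print(f"mean-median gap: raw {raw_gap:.2f}, after log1p {log_gap:.2f}")
```

After the transform, the mean-median gap shrinks, so histograms of the two splits become easier to compare on a common scale.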
