Why does my Keras model always predict the same result?

Problem description

I have seen this question asked before, but none of the existing solutions seem relevant to my problem.

I am trying to implement a basic binary classification algorithm using logistic regression to identify whether an image is a cat or a dog.

I believe I am structuring the data correctly. I add a Flatten layer before the first Dense layer, which I think receives the right shape, and then run it through two more Dense layers, the last of which has only 2 outputs (which, as I understand it, is how a binary classification like this should be set up).

Please look over my code and suggest what I could do better to:

1.) make the prediction output vary (instead of always picking one class or the other)
2.) make my accuracy and loss change after the second epoch

Things I have tried:
- changing the number of Dense layers and their parameters
- changing the size of the dataset (hence the count variable when processing the files)
- changing the number of epochs
- changing the optimizer from SGD to Adam (sketched below)
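For reference, a minimal sketch of what I mean by switching the optimizer; only the compile call changes, and the model object is the one defined further down:

# Hypothetical variant of my compile step, with Adam instead of SGD.
model.compile(loss = keras.losses.binary_crossentropy,
             optimizer = keras.optimizers.Adam(),
             metrics = ['accuracy'])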

Dataset initialization

import numpy as np
import cv2
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import random
import keras

dataDir = '/content/gdrive/My Drive/AI'
categories = ['dog', 'cat']

x, y = [], []

imgSize = 100

for cat in categories:
  folderPath = os.path.join(dataDir, cat) # path to the respective folders
  classNum = categories.index(cat)        # sets classification number (0 = dog, 1 = cat)
  count = 0                               # used for limiting the number of images to test
  for file in os.listdir(folderPath):
    count = count + 1                     
    try:
      # open image and convert to grayscale
      img = cv2.imread(os.path.join(folderPath, file), cv2.IMREAD_GRAYSCALE)

      # resize to a square of predefined dimensions
      newImg = cv2.resize(img, (imgSize, imgSize))

      # add images to x and labels to y
      x.append(newImg)
      y.append(classNum)
      if count >= 100:
        break

    # some images may be broken
    except Exception as e:
      pass

# y array to categorical
y = keras.utils.to_categorical(y, num_classes=2)

# shuffle data to increase training
random.shuffle(x)
random.shuffle(y)

x = np.array(x).reshape(-1, imgSize, imgSize, 1)
y = np.array(y)

# split data into default sized groups (75% train, 25% test)
xTrain, xTest, yTrain, yTest = train_test_split(x, y, test_size=0.25)

# display bar chart
objects = ('xTrain', 'xTest', 'yTrain', 'yTest')
y_pos = np.arange(len(objects))
maxItems = int((len(x) / 2 ) + 1)
arrays = [len(xTrain), len(xTest), len(yTrain), len(yTest)]

plt.bar(y_pos, arrays, align='center')
plt.xticks(y_pos, objects)
plt.ylabel('# of items')
plt.title('Items in Arrays')

plt.show()

Model setup

from keras.layers import Dense, Flatten
from keras.models import Sequential

shape = xTest.shape
model = Sequential([Flatten(),
                   Dense(100, activation = 'relu', input_shape = shape),
                   Dense(50, activation = 'relu'),
                   Dense(2, activation = 'softmax')])

model.compile(loss = keras.losses.binary_crossentropy,
             optimizer = keras.optimizers.sgd(),
             metrics = ['accuracy'])

model.fit(xTrain, yTrain,
         epochs=3,
         verbose=1,
         validation_data=(xTest, yTest))

model.summary()

which outputs:

Train on 150 samples, validate on 50 samples
Epoch 1/3
150/150 [==============================] - 1s 6ms/step - loss: 7.3177 - acc: 0.5400 - val_loss: 1.9236 - val_acc: 0.8800
Epoch 2/3
150/150 [==============================] - 0s 424us/step - loss: 3.4198 - acc: 0.7867 - val_loss: 1.9236 - val_acc: 0.8800
Epoch 3/3
150/150 [==============================] - 0s 430us/step - loss: 3.4198 - acc: 0.7867 - val_loss: 1.9236 - val_acc: 0.8800
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten_13 (Flatten)         (None, 10000)             0         
_________________________________________________________________
dense_45 (Dense)             (None, 100)               1000100   
_________________________________________________________________
dense_46 (Dense)             (None, 50)                5050      
_________________________________________________________________
dense_47 (Dense)             (None, 2)                 102       
=================================================================
Total params: 1,005,252
Trainable params: 1,005,252
Non-trainable params: 0

Prediction

y_pred = model.predict(xTest)

for y in y_pred:
  print(y)

which outputs:

[1. 0.]
[1. 0.]
[1. 0.]
.
.
.
[1. 0.]
1 Answer

There are several ways to skin this cat... pun intended. I can't tell whether your pipeline is working, so, assuming your data and labels are correct, I think the problem is in your data collection and model construction.

First, I don't think you have enough data. Most binary classification models like this are built on more than 1000 images, and you are working with far fewer. Second, you only train for 3 epochs, which is not enough; for the number of pictures you have I would suggest at least 50 epochs. Finding the right number is trial and error, though, and you need to watch whether you are overfitting.
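A minimal sketch of what the longer run could look like, reusing the arrays from your question (xTrain, yTrain, xTest, yTest); the History object that fit returns is one place to check for overfitting:

# Hypothetical longer training run: 50 epochs, same validation data as before.
history = model.fit(xTrain, yTrain,
                    epochs=50,
                    verbose=1,
                    validation_data=(xTest, yTest))

# If training accuracy keeps climbing while validation accuracy stalls or drops,
# the model is overfitting and fewer epochs (or more data) are needed.
# Older Keras versions use the keys 'acc'/'val_acc', newer ones 'accuracy'/'val_accuracy'.
print(history.history['acc'][-1], history.history['val_acc'][-1])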

Here is what I use to build a binary classification model.

from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Activation
from keras.optimizers import SGD
from keras.layers import Dense
from keras.utils import np_utils
import numpy as np
import cv2


data = []
labels = []
imageSize = 32
# Do whatever you need to do to build a folder of flattened/resized images
# and a labels list whose indexes match the index of each picture
for image in folder:
    imagePath = 'path/to/image/'
    imageLabel = 'whatever_label'
    image = cv2.imread(imagePath)
    features = cv2.resize(image, (imageSize, imageSize)).flatten()
    data.append(features)
    labels.append(imageLabel)

# Encode the labels
labelEncoder = LabelEncoder()
labels = labelEncoder.fit_transform(labels)

# Scale the image to [0, 1]
data = np.array(data) / 255.0
# Generate labels as [0, 1] instead of ['dog', 'cat']
labels = np_utils.to_categorical(labels, 2)

# Split data
(trainData, testData, trainLabels, testLabels) = train_test_split(data, labels, test_size = 0.25, random_state = 42)

# Construct Model
model = Sequential()
model.add(Dense(768, input_dim = imageSize * imageSize * 3, kernel_initializer = 'uniform', activation = 'relu'))
model.add(Dense(384, activation = 'relu', kernel_initializer = 'uniform'))
model.add(Dense(2))
model.add(Activation('softmax'))

# Compile
sgd = SGD(lr=0.01)
model.compile(loss = 'binary_crossentropy', optimizer = sgd, metrics = ['accuracy'])
model.fit(trainData, trainLabels, epochs = 50, batch_size = 128, verbose = 1)

# Determine Accuracy and loss
(loss, accuracy) = model.evaluate(testData, testLabels, batch_size = 128, verbose = 1)
print('[INFO] loss={:.4f}, accuracy: {:.4f}%'.format(loss, accuracy * 100))
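And, to sanity-check point 1.) from your question, a minimal sketch of inspecting this model's predictions; argmax collapses the two softmax probabilities into a 0/1 class index, and the outputs should no longer be identical for every sample:

# Hypothetical check that the predictions actually vary.
predictions = model.predict(testData)
predictedClasses = predictions.argmax(axis = 1)   # 0 or 1 per image
print(predictions[:5])
print(predictedClasses[:5])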

Hope that helps!
