Transfer learning on MNIST digits with VGG16

Problem description

I am trying to do transfer learning on MNIST digits. I am interested in getting the logits and using them for gradient-based attacks. But for some reason the kernel keeps dying, even though my machine is a GPU-enabled Apple M2 Max. I also tried Colab with a GPU and ran into the same problem. The dataset is not hard to learn, and I am reusing the ImageNet weights. How can I fix this?
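One possibility worth ruling out first (my assumption, not something stated in the question or answer): a kernel that dies silently during `fit`, with no Python traceback, is often an out-of-memory crash rather than a TensorFlow error. At (75, 75, 3) float32, the upsampled MNIST training set alone is sizeable:

```python
# Back-of-envelope memory footprint of the resized MNIST training images
n, h, w, c = 60_000, 75, 75, 3      # images, height, width, channels
bytes_per_float32 = 4
gib = n * h * w * c * bytes_per_float32 / 2**30
print(f"{gib:.1f} GiB")             # ~3.8 GiB for x_train alone, before any copies
```

Keras makes additional copies while batching, so peak usage can be a multiple of this; feeding data through a `tf.data` pipeline with on-the-fly resizing keeps the footprint down.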

class VGG16TransferLearning(tf.keras.Model):
  def __init__(self, base_model, models):
    super().__init__()
    # frozen VGG16 convolutional base
    self.base_model = base_model

    # classification head
    self.flatten = tf.keras.layers.Flatten()
    self.dense1 = tf.keras.layers.Dense(512, activation='relu')
    self.dense2 = tf.keras.layers.Dense(512, activation='relu')
    self.dense3 = tf.keras.layers.Dense(10)  # logits: no softmax here
    self.layers_list = [self.flatten, self.dense1, self.dense2, self.dense3]

    # stack the base model and the head (must happen inside __init__)
    self.model = models.Sequential([self.base_model, *self.layers_list])

  def call(self, inputs, training=False):
    # run through each layer, collecting intermediate activations
    activation_list = []
    out = inputs
    for layer in self.model.layers:
      out = layer(out, training=training)
      activation_list.append(out)
    # Always return the raw logits so compile()/fit()/evaluate() can compute
    # the loss; apply tf.nn.softmax(out) outside the model when probabilities
    # are needed. (Returning a (logits, prob) tuple only at inference breaks
    # the built-in validation step in fit().)
    return out

Here is how the class above is instantiated:

base_model = VGG16(weights="imagenet", include_top=False, input_shape=x_train[0].shape)

base_model.trainable = False

My input shape is (75, 75, 3).
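For context, MNIST images are 28×28 single-channel, while VGG16's ImageNet weights expect 3-channel input of at least 32×32, so a preprocessing step along these lines is implied (a sketch of my assumption; `to_vgg_input` is a name I made up, and 75×75 is the target size stated above, not a VGG16 requirement):

```python
import numpy as np
import tensorflow as tf

def to_vgg_input(images, size=(75, 75)):
    """Convert (N, 28, 28) grayscale images to (N, 75, 75, 3) float32."""
    x = images[..., np.newaxis].astype("float32") / 255.0  # add channel axis
    x = tf.image.resize(x, size)                           # -> (N, 75, 75, 1)
    return tf.image.grayscale_to_rgb(x)                    # -> (N, 75, 75, 3)

# e.g. (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
#      x_train = to_vgg_input(x_train)
```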

Here are the compile and fit calls:

from tensorflow.keras import layers, models

model = VGG16TransferLearning(base_model, models)

model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              optimizer=tf.keras.optimizers.legacy.Adam(),
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))
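Since the stated goal is to use the logits for gradient-based attacks, here is a minimal FGSM sketch built on those logits (my own illustration, not part of the question; it assumes calling the model returns raw logits and that pixel values are scaled to [0, 1]):

```python
import tensorflow as tf

def fgsm_perturb(model, images, labels, eps=0.1):
    """One-step FGSM: nudge each pixel in the sign of the loss gradient."""
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    images = tf.convert_to_tensor(images)
    with tf.GradientTape() as tape:
        tape.watch(images)                      # track gradients w.r.t. inputs
        logits = model(images, training=False)  # assumes raw logits come back
        loss = loss_fn(labels, logits)
    grad = tape.gradient(loss, images)
    # move each pixel by eps in the direction that increases the loss
    return tf.clip_by_value(images + eps * tf.sign(grad), 0.0, 1.0)
```

`eps` controls the perturbation budget; 0.1 here is just a placeholder value.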

Here is the error I get every time I call the fit method:

Kernel Restarting
The kernel for Untitled.ipynb appears to have died. It will restart automatically
1 Answer

The error came from my machine's configuration. My guess is that TensorFlow could not see my Mac's GPU, even though listing the physical devices evaluated to 1. But the problem is now resolved and everything works.
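For anyone hitting the same thing, the device check the answer alludes to looks like this (on Apple silicon, GPU support comes via the `tensorflow-metal` plugin; that package name is my addition based on Apple's published setup, not something stated in the answer):

```python
import tensorflow as tf

# If the GPU list is empty on an M-series Mac, TensorFlow is running
# CPU-only; installing the tensorflow-metal PluggableDevice usually
# makes the GPU visible.
print(tf.config.list_physical_devices('GPU'))
print(tf.config.list_physical_devices('CPU'))
```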
