Debugging a neural network dropout problem: probability not in [0, 1]

Question · Votes: 0 · Answers: 1

Hi, I tried to add a dropout rate to my NN using torch, and I end up with a strange error. Can someone help me?

I wrote the NN inside a function to make it easy to call. The function is below. (I personally think the problem is inside the NeuralNet class, but to give a reproducible example I have included everything.)

def train_neural_network(data_train_X, data_train_Y, batch_size, learning_rate, graph = True, dropout = 0.0 ):
  input_size = len(data_train_X.columns)
  hidden_size = 200
  num_classes = 4
  num_epochs = 120
  batch_size = batch_size
  learning_rate = learning_rate

  #the class of NN
  class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes, p = dropout):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, num_classes)

    def forward(self, x, p = dropout):
          out = F.relu(self.fc1(x))
          out = F.relu(self.fc2(out))
          out = nn.Dropout(out, p) # drop — this line raises the error below
          out = self.fc3(out)
          return out

  # prepare data
  X_train = torch.from_numpy(data_train_X.values).float()
  Y_train = torch.from_numpy(data_train_Y.values).float()

  # loading data 
  train = torch.utils.data.TensorDataset(X_train, Y_train)
  train_loader = torch.utils.data.DataLoader(train, batch_size=batch_size)

  net = NeuralNet(input_size, hidden_size, num_classes)

  # loss
  criterion = nn.CrossEntropyLoss()

  # optimiser
  optimiser = torch.optim.SGD(net.parameters(), lr=learning_rate)

  #proper training
  total_step = len(train_loader)
  loss_values = []

  for epoch in range(num_epochs+1):
    net.train()

    train_loss = 0.0

    for i, (predictors, results) in enumerate(train_loader, 0):
      # forward pass
      outputs = net(predictors)
      results = results.long() 
      results = results.squeeze_()
      loss = criterion(outputs, results)

      # backward and optimise
      optimiser.zero_grad()
      loss.backward()
      optimiser.step()

      # update loss
      train_loss += loss.item()

    loss_values.append(train_loss / batch_size )
  print('Finished Training')

  return net

And when I call the function:

net = train_neural_network(data_train_X = data_train_X, data_train_Y = data_train_Y, batch_size = batch_size, learning_rate = learning_rate, dropout = 0.1)

The error is the following:

net = train_neural_network(data_train_X = data_train_X, data_train_Y = data_train_Y, batch_size = batch_size, learning_rate = learning_rate, dropout = 0.1)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/dropout.py in __init__(self, p, inplace)
      8     def __init__(self, p=0.5, inplace=False):
      9         super(_DropoutNd, self).__init__()
---> 10         if p < 0 or p > 1:
     11             raise ValueError("dropout probability has to be between 0 and 1, "
     12                              "but got {}".format(p))

RuntimeError: bool value of Tensor with more than one value is ambiguous

Why do you think there is an error? Before I added the dropout rate, everything worked. Extra points if you know how to implement a bias in my network, for example on the hidden layers! I could not find any example online.

python debugging neural-network pytorch torch
1 Answer

0 votes

Your tensor has more than one value, and Python does not know how to interpret "greater than" for it. For example, is [1, 4, 6, 8, 3] > 2? That is ambiguous. In your code, `nn.Dropout(out, p)` passes the activation tensor `out` as the probability argument `p`, so Dropout's constructor check `if p < 0 or p > 1` fails with exactly this error. If you genuinely needed to range-check a whole tensor, you would write:

if torch.min(p) < 0 or torch.max(p) > 1:
    # do something

That comparison would run, and the branch would execute if any value were below 0 or above 1. In your network, however, the tensor should never reach that check at all: `nn.Dropout` is a module whose constructor takes only the probability; it must not be called directly on the activations.
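A minimal corrected sketch of the class (layer sizes taken from the question; the example input size of 8 is an assumption). It uses the functional form `F.dropout`, where the probability is a plain float and `training=self.training` keeps dropout active only in training mode. It also answers the bias question: `nn.Linear` already includes a bias term per layer (`bias=True` by default).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes, p=0.1):
        super(NeuralNet, self).__init__()
        # nn.Linear layers carry a bias by default (bias=True)
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.fc3 = nn.Linear(hidden_size, num_classes)
        self.p = p  # store the dropout probability as a float

    def forward(self, x):
        out = F.relu(self.fc1(x))
        out = F.relu(self.fc2(out))
        # dropout applied to activations; disabled automatically in eval mode
        out = F.dropout(out, p=self.p, training=self.training)
        return self.fc3(out)

net = NeuralNet(input_size=8, hidden_size=200, num_classes=4, p=0.1)
out = net(torch.randn(5, 8))
print(out.shape)  # torch.Size([5, 4])
```

Alternatively, create `self.drop = nn.Dropout(p)` once in `__init__` and call `self.drop(out)` in `forward`; both forms behave identically.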
