RuntimeError: Expected 3D (unbatched) or 4D (batched) input to conv2d, but got input of size: [64, 2]

Votes: 0 · Answers: 2

I am trying to build a custom CNN model in PyTorch for binary classification of RGB images, but I keep getting a runtime error: my input, which should have shape [64, 3, 128, 128], is reaching conv2d with shape [64, 2]. I have been trying to fix this for two days, and I still can't figure out what is wrong with the code.

Here is the network code:

class MyCNN(nn.Module):
  def __init__(self):
    super(MyCNN, self).__init__()
    self.network = nn.Sequential(
        nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2),

        nn.Conv2d(32, 64, 3),
        nn.ReLU(),
        nn.MaxPool2d(2),

        nn.Conv2d(64, 128, 3),
        nn.ReLU(),
        nn.MaxPool2d(2),

        nn.Flatten(),
        nn.Linear(in_features=25088, out_features=2048),
        nn.ReLU(),
        nn.Linear(2048, 1024),
        nn.ReLU(),
        nn.Linear(1024, 2),
    )

  def forward(self, x):
    return self.network(x)

And here is where it is called:

for epoch in range(num_epochs):
    for images, labels in data_loader:  
        images, labels = images.to(device), labels.to(device)

        optimizer.zero_grad()

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)
        
        # Backward and optimize
        loss.backward()
        optimizer.step()

    print('Epoch [{}/{}], Loss: {:.4f}'.format(epoch+1, num_epochs, loss.item()))

Here is the stack trace:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-30-fb9ee290e1d6> in <module>()
      7 
      8         # Forward pass
----> 9         outputs = model(images)
     10         loss = criterion(outputs, labels)
     11 

6 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1128         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130             return forward_call(*input, **kwargs)
   1131         # Do not call functions when jit is used
   1132         full_backward_hooks, non_full_backward_hooks = [], []

<ipython-input-29-09c58015e865> in forward(self, x)
     27         x = layer(x)
     28         print(x.shape)
---> 29     return self.network(x)
     30 
     31 model = MyCNN()

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1128         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130             return forward_call(*input, **kwargs)
   1131         # Do not call functions when jit is used
   1132         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py in forward(self, input)
    137     def forward(self, input):
    138         for module in self:
--> 139             input = module(input)
    140         return input
    141 

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1128         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130             return forward_call(*input, **kwargs)
   1131         # Do not call functions when jit is used
   1132         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in forward(self, input)
    455 
    456     def forward(self, input: Tensor) -> Tensor:
--> 457         return self._conv_forward(input, self.weight, self.bias)
    458 
    459 class Conv3d(_ConvNd):

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    452                             _pair(0), self.dilation, self.groups)
    453         return F.conv2d(input, weight, bias, self.stride,
--> 454                         self.padding, self.dilation, self.groups)
    455 
    456     def forward(self, input: Tensor) -> Tensor:

RuntimeError: Expected 3D (unbatched) or 4D (batched) input to conv2d, but got input of size: [64, 2]

I would really appreciate your help. Apologies in advance if the solution is simple and I'm just not seeing it. Cheers.

python machine-learning deep-learning pytorch conv-neural-network
2 Answers

Votes: 0

It looks like your images and labels got swapped somewhere: the images should have size (64, 3, 128, 128), but the labels have size (64, 2), and that is exactly the shape reaching conv2d. With the right shapes the model works fine. Here is my code:

Code:

import torch
import torch.nn as nn
import torch.optim as optim

class MyCNN(nn.Module):
  def __init__(self):
    super(MyCNN, self).__init__()
    self.network = nn.Sequential(
        nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2),

        nn.Conv2d(32, 64, 3),
        nn.ReLU(),
        nn.MaxPool2d(2),

        nn.Conv2d(64, 128, 3),
        nn.ReLU(),
        nn.MaxPool2d(2),

        nn.Flatten(),
        nn.Linear(in_features=25088, out_features=2048),
        nn.ReLU(),
        nn.Linear(2048, 1024),
        nn.ReLU(),
        nn.Linear(1024, 2),
    )

  def forward(self, x):
    return self.network(x)

model = MyCNN()

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr = 0.001)

optimizer.zero_grad()

# Forward pass
images = torch.randn(64, 3, 128, 128)  # dummy batch of RGB images
labels = torch.randint(0, 2, (64,))    # class indices, as expected by CrossEntropyLoss
outputs = model(images)
loss = criterion(outputs, labels)
        
# Backward and optimize
loss.backward()
optimizer.step()
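Note that `nn.CrossEntropyLoss` accepts targets either as class indices of shape `(N,)` (long dtype) or, since PyTorch 1.10, as class probabilities of shape `(N, C)`. A minimal check that the two forms agree when the probabilities are one-hot:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(4, 2)

# Class-index targets: shape (N,), dtype long
idx_targets = torch.tensor([0, 1, 1, 0])
loss_idx = criterion(logits, idx_targets)

# Probability targets: shape (N, C), rows summing to 1 (PyTorch >= 1.10)
prob_targets = torch.tensor([[1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [1.0, 0.0]])
loss_prob = criterion(logits, prob_targets)

print(loss_idx.item(), loss_prob.item())  # identical for one-hot targets
```

A raw `(64, 2)` tensor of random floats would not error, but it would be interpreted as (possibly negative) class probabilities, which is almost never what you want.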

I suggest changing this line:

for images, labels in data_loader:  
        images, labels = images.to(device), labels.to(device)

to this:

for labels, images in data_loader:  
        images, labels = images.to(device), labels.to(device)
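Before changing the unpacking order, it may be worth printing the shapes of one batch to confirm what the loader actually yields. A quick sketch (using a hypothetical `TensorDataset` stand-in for your real dataset):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for the real dataset. If your Dataset's __getitem__ returns
# (label, image) instead of (image, label), the training loop silently
# unpacks the labels into `images`, producing the [64, 2] conv2d input.
dataset = TensorDataset(torch.randn(8, 3, 128, 128), torch.randint(0, 2, (8,)))
data_loader = DataLoader(dataset, batch_size=4)

first, second = next(iter(data_loader))
print(first.shape)   # expect the images: torch.Size([4, 3, 128, 128])
print(second.shape)  # expect the labels: torch.Size([4])
```

If the first tensor printed by your real loader has shape (64, 2), the unpacking order is indeed reversed.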

Votes: 0

nn.Linear(in_features=25088, out_features=2048),

How do we get this number 25088? In my model this line gives an error and I can't understand why.
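The 25088 comes from tracing the spatial size through the convolutional layers for a 128×128 input: each `Conv2d` with kernel 3 and no padding shrinks height and width by 2, and each `MaxPool2d(2)` halves them (rounding down). An easy way to check this for your own input size is to run a dummy tensor through the convolutional part (a sketch, assuming the same layers as the model above):

```python
import torch
import torch.nn as nn

# Convolutional part of the model above, with the shape trace per stage
features = nn.Sequential(
    nn.Conv2d(3, 32, 3), nn.ReLU(), nn.MaxPool2d(2),    # 128 -> 126 -> 63
    nn.Conv2d(32, 64, 3), nn.ReLU(), nn.MaxPool2d(2),   # 63 -> 61 -> 30
    nn.Conv2d(64, 128, 3), nn.ReLU(), nn.MaxPool2d(2),  # 30 -> 28 -> 14
)

out = features(torch.randn(1, 3, 128, 128))
print(out.shape)                # torch.Size([1, 128, 14, 14])
print(out.flatten(1).shape[1])  # 128 * 14 * 14 = 25088
```

If your images have a different size, this printed number is the `in_features` value your first `nn.Linear` needs.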
