PyTorch model throws an error for a list of layers

Problem description · votes: 0 · answers: 1

I designed the following torch model with two Conv2d layers. It runs without any errors.

import torch.nn as nn
from torchsummary import summary

class mini_unet(nn.Module):
    def __init__(self):
        super(mini_unet, self).__init__()
        self.c1 = nn.Conv2d(1, 1, 3, padding = 1)
        self.r1 = nn.ReLU()
        self.c2 = nn.Conv2d(1, 1, 3, padding = 1)
        self.r2 = nn.ReLU()

    def forward(self, x):
        x = self.c1(x)
        x = self.r1(x)
        x = self.c2(x)
        x = self.r2(x)
        return x

a = mini_unet().cuda()

print(a)

However, suppose I have too many layers and I don't want to write out every one of them explicitly in the forward function. So I used a list to automate it, like below.

import torch.nn as nn
from torchsummary import summary

class mini_unet2(nn.Module):
    def __init__(self):
        super(mini_unet2, self).__init__()
        self.layers = [nn.Conv2d(1, 1, 3, padding = 1),
                       nn.ReLU(),
                       nn.Conv2d(1, 1, 3, padding = 1),
                       nn.ReLU()]

    def forward(self, x):
        for l in self.layers:
            x = l(x)
        return x

a2 = mini_unet2().cuda()
print(a2)
summary(a2, (1,4,4))

But this gives me the error below, which is strange: I already called cuda(), so why doesn't it work?

RuntimeError                              Traceback (most recent call last)
<ipython-input-36-1d71e75b96e0> in <module>
     17 a2 = mini_unet2().cuda()
     18 print(a2)
---> 19 summary(a2, (1,4,4))

~/anaconda3/envs/torch/lib/python3.6/site-packages/torchsummary/torchsummary.py in summary(model, input_size, batch_size, device)
     70     # make a forward pass
     71     # print(x.shape)
---> 72     model(*x)
     73 
     74     # remove these hooks

~/anaconda3/envs/torch/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    487             result = self._slow_forward(*input, **kwargs)
    488         else:
--> 489             result = self.forward(*input, **kwargs)
    490         for hook in self._forward_hooks.values():
    491             hook_result = hook(self, input, result)

<ipython-input-36-1d71e75b96e0> in forward(self, x)
     12     def forward(self, x):
     13         for l in self.layers:
---> 14             x = l(x)
     15         return x
     16 

~/anaconda3/envs/torch/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    487             result = self._slow_forward(*input, **kwargs)
    488         else:
--> 489             result = self.forward(*input, **kwargs)
    490         for hook in self._forward_hooks.values():
    491             hook_result = hook(self, input, result)

~/anaconda3/envs/torch/lib/python3.6/site-packages/torch/nn/modules/conv.py in forward(self, input)
    318     def forward(self, input):
    319         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 320                         self.padding, self.dilation, self.groups)
    321 
    322 

RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
python-3.x pytorch torch
1 Answer (score: 1)

The error may be a bit counter-intuitive, but it stems from using a plain Python list to hold your layers.
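To see why: modules stored in a plain Python list are never registered as submodules, so calls like .cuda() and .parameters() simply don't reach them — the conv weights stay on the CPU while the input tensor is moved to the GPU. A minimal CPU-side sketch (not from the original post) makes the difference visible:

```python
import torch.nn as nn

class PlainList(nn.Module):
    def __init__(self):
        super().__init__()
        # plain Python list: the layers are invisible to PyTorch
        self.layers = [nn.Conv2d(1, 1, 3, padding=1), nn.ReLU()]

class Registered(nn.Module):
    def __init__(self):
        super().__init__()
        # nn.ModuleList: the layers are registered as submodules
        self.layers = nn.ModuleList([nn.Conv2d(1, 1, 3, padding=1), nn.ReLU()])

print(len(list(PlainList().parameters())))   # 0 -> nothing for .cuda() to move
print(len(list(Registered().parameters())))  # 2 -> the conv's weight and bias
```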

According to the documentation, you need to use torch.nn.ModuleList to contain submodules, not a plain Python list.

So simply wrapping the list, nn.ModuleList(list), solves the error:

import torch.nn as nn
from torchsummary import summary

class mini_unet2(nn.Module):
    def __init__(self):
        super(mini_unet2, self).__init__()
        self.layers = nn.ModuleList([nn.Conv2d(1, 1, 3, padding = 1),
        nn.ReLU(),
        nn.Conv2d(1, 1, 3, padding = 1),
        nn.ReLU()])

    def forward(self, x):
        for l in self.layers:
            x = l(x)
        return x

a2 = mini_unet2().cuda()
print(a2)
summary(a2, (1,4,4))
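As a side note (not part of the original answer): when the layers are applied strictly in order, nn.Sequential can replace both the ModuleList and the loop in forward. A minimal CPU sketch:

```python
import torch
import torch.nn as nn

# nn.Sequential registers the submodules and chains them automatically,
# so no explicit loop in forward is needed.
model = nn.Sequential(
    nn.Conv2d(1, 1, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(1, 1, 3, padding=1),
    nn.ReLU(),
)

out = model(torch.zeros(1, 1, 4, 4))
print(out.shape)  # torch.Size([1, 1, 4, 4])
```

ModuleList is still the right choice when forward needs non-sequential control flow, e.g. skip connections as in a full U-Net.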