First, I ran into a problem with conv1 when in_channels=1, so I changed it to 8.
Why do I have to declare it as 8?
Now the next error occurs at the first max pooling layer; it says:
Given input size: (128x1x10372). Calculated output size: (128x0x2593). Output size is too small
I think this has to do with the first conv layer consuming the full height of the image, and pooling then not being able to shrink that dimension (the second value) down to size 0. Or am I misunderstanding that part of the message? How can I fix this?
Are there any other improvements I should make?
# First convolutional layer
self.conv1 = nn.Conv2d(in_channels=8, out_channels=128, kernel_size=(128,6))
# Error that occurred: Given groups=1, weight of size [128, 1, 128, 6], expected input[1, 8, 128, 10377] to have 1 channels, but got 8 channels instead
self.relu1 = nn.ReLU()
#self.bn1 = nn.BatchNorm2d(128)
init.kaiming_normal_(self.conv1.weight, a=0.1)
self.conv1.bias.data.zero_()
conv_layer1 = [self.conv1, self.relu1]
self.conv1 = nn.Sequential(*conv_layer1)
# Second Layer: 1. Max Pooling Layer
self.pool1 = nn.MaxPool2d(kernel_size=4)
# Error that occurs: Given input size: (128x1x10372). Calculated output size: (128x0x2593). Output size is too small
# Third Layer: 2. Convolution Block
self.conv2 = nn.Conv2d(in_channels=128, out_channels=128, kernel_size=(128,6))
self.relu2 = nn.ReLU()
#self.bn2 = nn.BatchNorm2d(128)
init.kaiming_normal_(self.conv2.weight, a=0.1)
self.conv2.bias.data.zero_()
conv_layer2 = [self.conv2, self.relu2]
self.conv2 = nn.Sequential(*conv_layer2)
# Fourth layer: 2. Max Pooling layer
self.pool2 = nn.MaxPool2d(kernel_size=5)
#conv_layers += self.pool2
# Hidden fully connected layer
self.fc1 = nn.Linear(400, 400)
# Output layer
# With 50 outputs for 50 latent factors
self.fc2 = nn.Linear(128, 50)
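For reference, the shape arithmetic behind both errors can be sketched as follows. This is a minimal sketch, assuming the input is a single-channel spectrogram of height 128 and width 10377, as the error messages suggest; the concrete sizes are taken from those messages, not from the original data loader:

```python
import torch
import torch.nn as nn

# 1) in_channels must equal dim 1 of the input tensor (N, C, H, W).
#    A Conv2d(in_channels=1, ...) only accepts inputs with C == 1.
x = torch.randn(1, 1, 128, 10377)  # (batch, channels, height, width)
conv1 = nn.Conv2d(in_channels=1, out_channels=128, kernel_size=(128, 6))
y = conv1(x)
# Height: 128 - 128 + 1 = 1; width: 10377 - 6 + 1 = 10372
print(y.shape)  # torch.Size([1, 128, 1, 10372])

# 2) MaxPool2d(kernel_size=4) pools over BOTH spatial dims, and 1 // 4 == 0,
#    which is exactly the "Output size is too small" error. Pooling only
#    along the width leaves the height-1 dimension intact:
pool1 = nn.MaxPool2d(kernel_size=(1, 4))
z = pool1(y)
print(z.shape)  # torch.Size([1, 128, 1, 2593])
```

So the kernel of height 128 collapses the height to 1 after the first conv, and any later layer with a kernel taller than 1 (the pool with kernel_size=4, or conv2 with kernel_size=(128, 6)) will fail for the same reason.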
I wanted to leave a comment but can't, so I'll say it here. This isn't an answer, but note that the code snippet looks like PyTorch, not TensorFlow. Switching the tags would help attract the right answers.