Trying to run ResNet on 128 x 128 ImageNet images, but I get an "input dimensions must be equal" error. How do I fix it?


I'm trying to build an image classifier by following a tutorial.

I completed the tutorial successfully and got good accuracy on the CIFAR-10 dataset.

Now I'm trying to run the classifier on the ImageNet dataset.

I made a few modifications to the tutorial's original code so the model can handle images from the ImageNet dataset, including resizing all images to the shape (128, 128, 3).

All of the code and the traceback below are also in this notebook.

Dataset

import mindspore as ms
from mindspore import dtype as mstype
from mindspore.dataset import vision, transforms
import mindspore.dataset as ds

batch_size = 32  # Batch size
image_size = 128  # Image size of training data
workers = 2  # Number of parallel workers
num_classes = 1000  # Number of classes


def create_dataset(data_set, usage, resize, batch_size, workers):
    # Note: the resize argument is currently unused; the target size is
    # hard-coded to (128, 128) below.
    trans = [
        vision.Resize((128, 128)),
        vision.Rescale(1.0 / 255.0, 0.0),
        vision.HWC2CHW()
    ]

    target_trans = transforms.TypeCast(mstype.int32)

    data_set = data_set.map(operations=trans,
                            input_columns='image',
                            num_parallel_workers=workers)

    data_set = data_set.map(operations=target_trans,
                            input_columns='label',
                            num_parallel_workers=workers)

    data_set = data_set.batch(batch_size)
    return data_set


trainset = ds.ImageFolderDataset("./imagenet2012/train", decode=True)
testset = ds.ImageFolderDataset("./imagenet2012/val", decode=True)

dataset_train = create_dataset(trainset,
                               usage="train",
                               resize=image_size,
                               batch_size=batch_size,
                               workers=workers)

dataset_val = create_dataset(testset,
                             usage="test",
                             resize=image_size,
                             batch_size=batch_size,
                             workers=workers)

step_size_val = dataset_val.get_dataset_size()
step_size_train = dataset_train.get_dataset_size()

I know I should probably use padding or something instead. The code above isn't perfect, but it works. I tested the dataset by sampling some images, rendering them, and checking their shapes, and everything looked fine.

The code above successfully produces the two datasets, a training set and a test set.
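
A minimal version of that shape check might look like this (a sketch, not the notebook's exact code; it just pulls one batch and inspects it):

# Sketch of a sanity check: pull one batch and confirm shapes and dtypes.
images, labels = next(dataset_train.create_tuple_iterator())
print(images.shape, images.dtype)  # expect (32, 3, 128, 128) after HWC2CHW + batching
print(labels.shape, labels.dtype)  # expect (32,), int32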

ResidualBlockBase

The following code defines the ResidualBlockBase class, which implements the basic building-block structure. It is the same as in the tutorial, and I don't think it needs any modification to work with the ImageNet dataset.

from typing import Type, Union, List, Optional
import mindspore.nn as nn
from mindspore.common.initializer import Normal

# Initialize the parameters of the convolutional layer and BatchNorm layer
weight_init = Normal(mean=0, sigma=0.02)
gamma_init = Normal(mean=1, sigma=0.02)

class ResidualBlockBase(nn.Cell):
    expansion: int = 1  # The number of convolution kernels at the last layer is the same as that of convolution kernels at the first layer.

    def __init__(self, in_channel: int, out_channel: int,
                 stride: int = 1, norm: Optional[nn.Cell] = None,
                 down_sample: Optional[nn.Cell] = None) -> None:
        super(ResidualBlockBase, self).__init__()
        if not norm:
            self.norm = nn.BatchNorm2d(out_channel)
        else:
            self.norm = norm

        self.conv1 = nn.Conv2d(in_channel, out_channel,
                               kernel_size=3, stride=stride,
                               weight_init=weight_init)
        # conv2 consumes conv1's output, which has out_channel channels
        self.conv2 = nn.Conv2d(out_channel, out_channel,
                               kernel_size=3, weight_init=weight_init)
        self.relu = nn.ReLU()
        self.down_sample = down_sample

    def construct(self, x):
        """ResidualBlockBase construct."""
        identity = x  # shortcut

        out = self.conv1(x)  # First layer of the main body: 3 x 3 convolutional layer
        out = self.norm(out)
        out = self.relu(out)
        out = self.conv2(out)  # Second layer of the main body: 3 x 3 convolutional layer
        out = self.norm(out)

        if self.down_sample is not None:
            identity = self.down_sample(x)
        out += identity  # output the sum of the main body and the shortcuts
        out = self.relu(out)

        return out

ResidualBlock

The following code defines the ResidualBlock class, which implements the bottleneck structure. It is also the same as in the tutorial, and I don't think it needs any modification to work with the ImageNet dataset either.

class ResidualBlock(nn.Cell):
    expansion = 4  # The number of convolution kernels at the last layer is four times that of convolution kernels at the first layer.

    def __init__(self, in_channel: int, out_channel: int,
                 stride: int = 1, down_sample: Optional[nn.Cell] = None) -> None:
        super(ResidualBlock, self).__init__()

        self.conv1 = nn.Conv2d(in_channel, out_channel,
                               kernel_size=1, weight_init=weight_init)
        self.norm1 = nn.BatchNorm2d(out_channel)
        self.conv2 = nn.Conv2d(out_channel, out_channel,
                               kernel_size=3, stride=stride,
                               weight_init=weight_init)
        self.norm2 = nn.BatchNorm2d(out_channel)
        self.conv3 = nn.Conv2d(out_channel, out_channel * self.expansion,
                               kernel_size=1, weight_init=weight_init)
        self.norm3 = nn.BatchNorm2d(out_channel * self.expansion)

        self.relu = nn.ReLU()
        self.down_sample = down_sample

    def construct(self, x):

        identity = x  # shortcut

        out = self.conv1(x)  # First layer of the main body: 1 x 1 convolutional layer
        out = self.norm1(out)
        out = self.relu(out)
        out = self.conv2(out)  # Second layer of the main body: 3 x 3 convolutional layer
        out = self.norm2(out)
        out = self.relu(out)
        out = self.conv3(out)  # Third layer of the main body: 1 x 1 convolutional layer
        out = self.norm3(out)

        if self.down_sample is not None:
            identity = self.down_sample(x)

        out += identity  # The output is the sum of the main body and the shortcut.
        out = self.relu(out)

        return out

make_layer

The following example defines make_layer, which builds and stacks the residual blocks. Its parameters are as follows:

last_out_channel: the number of output channels of the previous layer

block: the residual block type; either ResidualBlockBase or ResidualBlock

channel: the base number of channels of the blocks in this layer

block_nums: the number of residual blocks to stack

stride: the stride of the first block's convolution

def make_layer(last_out_channel, block: Type[Union[ResidualBlockBase, ResidualBlock]],
               channel: int, block_nums: int, stride: int = 1):
    down_sample = None  # shortcut

    # Project the shortcut when the spatial size or channel count changes.
    if stride != 1 or last_out_channel != channel * block.expansion:
        down_sample = nn.SequentialCell([
            nn.Conv2d(last_out_channel, channel * block.expansion,
                      kernel_size=1, stride=stride, weight_init=weight_init),
            nn.BatchNorm2d(channel * block.expansion, gamma_init=gamma_init)
        ])

    layers = []
    layers.append(block(last_out_channel, channel, stride=stride, down_sample=down_sample))

    in_channel = channel * block.expansion
    # Stack the remaining residual blocks.
    for _ in range(1, block_nums):
        layers.append(block(in_channel, channel))

    return nn.SequentialCell(layers)
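
For example, in the ResNet class below, layer1 = make_layer(64, ResidualBlock, 64, 3) stacks three bottleneck blocks; since ResidualBlock.expansion is 4, its output has 64 * 4 = 256 channels, which is why layer2 receives 64 * block.expansion as its last_out_channel.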

ResNet class

The following sample code builds a ResNet-50 model. You can call the resnet50 function to construct it. The resnet50 function's parameters are as follows:

num_classes: the number of classes. The default value is 1000.

pretrained: whether to download the corresponding pretrained checkpoint and load its parameters into the network.

from download import download
from mindspore import load_checkpoint, load_param_into_net


class ResNet(nn.Cell):
    def __init__(self, block: Type[Union[ResidualBlockBase, ResidualBlock]],
                 layer_nums: List[int], num_classes: int, input_channel: int) -> None:
        super(ResNet, self).__init__()

        self.relu = nn.ReLU()
        # At the first convolutional layer, the number of the input channels is 3 (color image) and that of the output channels is 64.
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, weight_init=weight_init)
        self.norm = nn.BatchNorm2d(64)
        # Maximum pooling layer, reducing the image size
        self.max_pool = nn.MaxPool2d(kernel_size=3, stride=2, pad_mode='same')
        # Define each residual network structure block
        self.layer1 = make_layer(64, block, 64, layer_nums[0])
        self.layer2 = make_layer(64 * block.expansion, block, 128, layer_nums[1], stride=2)
        self.layer3 = make_layer(128 * block.expansion, block, 256, layer_nums[2], stride=2)
        self.layer4 = make_layer(256 * block.expansion, block, 512, layer_nums[3], stride=2)
        # average pooling layer (nn.AvgPool2d defaults to kernel_size=1)
        self.avg_pool = nn.AvgPool2d()
        # flatten layer
        self.flatten = nn.Flatten()
        # fully-connected layer
        self.fc = nn.Dense(in_channels=input_channel, out_channels=num_classes)

    def construct(self, x):

        x = self.conv1(x)
        x = self.norm(x)
        x = self.relu(x)
        x = self.max_pool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        x = self.avg_pool(x)
        x = self.flatten(x)
        x = self.fc(x)

        return x

def _resnet(model_url: str, block: Type[Union[ResidualBlockBase, ResidualBlock]],
            layers: List[int], num_classes: int, pretrained: bool,
            pretrained_ckpt: str, input_channel: int):
    model = ResNet(block, layers, num_classes, input_channel)

    if pretrained:
        # load pre-trained models
        download(url=model_url, path=pretrained_ckpt, replace=True)
        param_dict = load_checkpoint(pretrained_ckpt)
        load_param_into_net(model, param_dict)

    return model


def resnet50(num_classes: int = 1000, pretrained: bool = False):
    "ResNet50 model"
    resnet50_url = "https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/models/application/resnet50_224_new.ckpt"
    resnet50_ckpt = "./LoadPretrainedModel/resnet50_224_new.ckpt"
    return _resnet(resnet50_url, ResidualBlock, [3, 4, 6, 3], num_classes,
                   pretrained, resnet50_ckpt, 2048)

I'm not sure whether the last line in the code above needs to be modified to work with my new data, because it seems to be related to the error I'm getting.

Training the model

I set out_channels to 1000 for the ImageNet dataset:

# Define the ResNet50 network.
network = resnet50(pretrained=True)

# Size of the input layer of the fully-connected layer
in_channel = network.fc.in_channels
fc = nn.Dense(in_channels=in_channel, out_channels=1000)
# Reset the fully-connected layer.
network.fc = fc
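
A quick way to sanity-check the new head would be to push one dummy batch through the network before training (a sketch; the input shape matches the 128 x 128 pipeline above):

import numpy as np

# One dummy batch through the network immediately exposes any mismatch
# between the backbone's flattened features and the new fc layer.
dummy = ms.Tensor(np.zeros((batch_size, 3, 128, 128), dtype=np.float32))
print(network(dummy).shape)  # expected: (32, 1000)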

Hyperparameters

I don't think the following hyperparameters need to change, so I left them as they are.

# Set the learning rate
num_epochs = 5
lr = nn.cosine_decay_lr(min_lr=0.00001, max_lr=0.001, total_step=step_size_train * num_epochs,
                        step_per_epoch=step_size_train, decay_epoch=num_epochs)
# Define optimizer and loss function
opt = nn.Momentum(params=network.trainable_params(), learning_rate=lr, momentum=0.9)
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')


def forward_fn(inputs, targets):
    logits = network(inputs)
    loss = loss_fn(logits, targets)
    return loss


grad_fn = ms.value_and_grad(forward_fn, None, opt.parameters)


def train_step(inputs, targets):
    loss, grads = grad_fn(inputs, targets)
    opt(grads)
    return loss

Training and evaluation functions

I don't think the following code needs to change either, so I left it as it is.

import os

# Creating Iterators
data_loader_train = dataset_train.create_tuple_iterator(num_epochs=num_epochs)
data_loader_val = dataset_val.create_tuple_iterator(num_epochs=num_epochs)

# Optimal model storage path
best_acc = 0
best_ckpt_dir = "./BestCheckpoint"
best_ckpt_path = "./BestCheckpoint/resnet50-best.ckpt"

if not os.path.exists(best_ckpt_dir):
    os.mkdir(best_ckpt_dir)

import mindspore.ops as ops


def train(data_loader, epoch):
    """Model taining"""
    losses = []
    network.set_train(True)

    for i, (images, labels) in enumerate(data_loader):
        loss = train_step(images, labels)
        if i % 100 == 0 or i == step_size_train - 1:
            print('Epoch: [%3d/%3d], Steps: [%3d/%3d], Train Loss: [%5.3f]' %
                  (epoch + 1, num_epochs, i + 1, step_size_train, loss))
        losses.append(loss)

    return sum(losses) / len(losses)


def evaluate(data_loader):
    """Model Evaluation"""
    network.set_train(False)

    correct_num = 0.0  # Number of correct predictions
    total_num = 0.0  # Total number of predictions

    for images, labels in data_loader:
        logits = network(images)
        pred = logits.argmax(axis=1)  # Prediction results
        correct = ops.equal(pred, labels).reshape((-1, ))
        correct_num += correct.sum().asnumpy()
        total_num += correct.shape[0]

    acc = correct_num / total_num  # Accuracy

    return acc

The error

The following code throws a ValueError:

For 'MatMul', the input dimensions must be equal, but got 'x1_col': 32768 and 'x2_row': 2048.

print("Start Training Loop ...")

for epoch in range(num_epochs):
    curr_loss = train(data_loader_train, epoch)
    curr_acc = evaluate(data_loader_val)

    print("-" * 50)
    print("Epoch: [%3d/%3d], Average Train Loss: [%5.3f], Accuracy: [%5.3f]" % (
        epoch+1, num_epochs, curr_loss, curr_acc
    ))
    print("-" * 50)

    # Save the model that has achieved the highest prediction accuracy
    if curr_acc > best_acc:
        best_acc = curr_acc
        ms.save_checkpoint(network, best_ckpt_path)

print("=" * 80)
print(f"End of validation the best Accuracy is: {best_acc: 5.3f}, "
      f"save the best ckpt file in {best_ckpt_path}", flush=True)

I don't even know where to start checking my code, because I didn't change much of the original tutorial code.

Any hints or suggested solutions would be greatly appreciated.

Here is the traceback:


ValueError                                Traceback (most recent call last)
Cell In[18], line 5
      2 print("Start Training Loop ...")
      4 for epoch in range(num_epochs):
----> 5     curr_loss = train(data_loader_train, epoch)
      6     curr_acc = evaluate(data_loader_val)
      8     print("-" * 50)

Cell In[17], line 10, in train(data_loader, epoch)
      7 network.set_train(True)
      9 for i, (images, labels) in enumerate(data_loader):
---> 10     loss = train_step(images, labels)
     11     if i % 100 == 0 or i == step_size_train - 1:
     12         print('Epoch: [%3d/%3d], Steps: [%3d/%3d], Train Loss: [%5.3f]' %
     13               (epoch + 1, num_epochs, i + 1, step_size_train, loss))

Cell In[15], line 20, in train_step(inputs, targets)
     19 def train_step(inputs, targets):
---> 20     loss, grads = grad_fn(inputs, targets)
     21     opt(grads)
     22     return loss

File ~/miniconda3/lib/python3.9/site-packages/mindspore/ops/composite/base.py:620, in _Grad.__call__.<locals>.after_grad(*args, **kwargs)
    619 def after_grad(*args, **kwargs):
--> 620     return grad_(fn_, weights)(*args, **kwargs)

File ~/miniconda3/lib/python3.9/site-packages/mindspore/common/api.py:106, in _wrap_func.<locals>.wrapper(*arg, **kwargs)
    104 @wraps(fn)
    105 def wrapper(*arg, **kwargs):
--> 106     results = fn(*arg, **kwargs)
    107     return _convert_python_data(results)

File ~/miniconda3/lib/python3.9/site-packages/mindspore/ops/composite/base.py:595, in _Grad.__call__.<locals>.after_grad(*args, **kwargs)
    593 @_wrap_func
    594 def after_grad(*args, **kwargs):
--> 595     res = self._pynative_forward_run(fn, grad_, weights, args, kwargs)
    596     _pynative_executor.grad(fn, grad_, weights, grad_position, *args, **kwargs)
    597     out = _pynative_executor()

File ~/miniconda3/lib/python3.9/site-packages/mindspore/ops/composite/base.py:645, in _Grad._pynative_forward_run(self, fn, grad, weights, args, kwargs)
    643 _pynative_executor.set_grad_flag(True)
    644 _pynative_executor.new_graph(fn, *args, **new_kwargs)
--> 645 outputs = fn(*args, **new_kwargs)
    646 _pynative_executor.end_graph(fn, outputs, *args, **new_kwargs)
    647 return outputs

Cell In[15], line 11, in forward_fn(inputs, targets)
     10 def forward_fn(inputs, targets):
---> 11     logits = network(inputs)
     12     loss = loss_fn(logits, targets)
     13     return loss

File ~/miniconda3/lib/python3.9/site-packages/mindspore/nn/cell.py:662, in Cell.__call__(self, *args, **kwargs)
    660 except Exception as err:
    661     _pynative_executor.clear_res()
--> 662     raise err
    664 if isinstance(output, Parameter):
    665     output = output.data

File ~/miniconda3/lib/python3.9/site-packages/mindspore/nn/cell.py:659, in Cell.__call__(self, *args, **kwargs)
    657     _pynative_executor.new_graph(self, *args, **kwargs)
    658     output = self._run_construct(args, kwargs)
--> 659     _pynative_executor.end_graph(self, output, *args, **kwargs)
    660 except Exception as err:
    661     _pynative_executor.clear_res()

File ~/miniconda3/lib/python3.9/site-packages/mindspore/common/api.py:1304, in _PyNativeExecutor.end_graph(self, obj, output, *args, **kwargs)
   1291 def end_graph(self, obj, output, *args, **kwargs):
   1292     """
   1293     Clean resources after building forward and backward graph.
   1294 
   (...)
   1302         None.
   1303     """
-> 1304     self._executor.end_graph(obj, output, *args, *(kwargs.values()))

ValueError: For 'MatMul' the input dimensions must be equal, but got 'x1_col': 32768 and 'x2_row': 2048.

----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/core/ops/mat_mul.cc:101 InferShape
1 Answer

For 'MatMul' the input dimensions must be equal, but got 'x1_col': 32768 and 'x2_row': 2048.

If you look at the values in the error message, note that 32768 = 2048 x 4 x 4. The model's avg_pool layer is nn.AvgPool2d() with its default kernel size of 1, so it does not shrink the feature map at all. The tutorial's 32 x 32 CIFAR-10 images are downsampled to 1 x 1 x 2048 by the time they reach the flatten layer, which flattens to exactly the 2048 features the fully-connected layer expects, while your 128 x 128 images only get down to 4 x 4 x 2048, which flattens to 32768. So the problem is that the model still assumes the original image size somewhere along the way. This looks like the likely culprit:

def resnet50(num_classes: int = 1000, pretrained: bool = False):
    ...
    return _resnet(resnet50_url, ResidualBlock, [3, 4, 6, 3], num_classes, pretrained, resnet50_ckpt, 2048)
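
Assuming you keep the 128 x 128 pipeline from the question (so the feature map entering flatten is 4 x 4 x 2048), here is a minimal sketch of two possible fixes; apply one of them, not both:

import mindspore.nn as nn

# Option 1 (sketch): pool the final 4 x 4 feature map down to 1 x 1 so the
# flattened vector has 2048 entries again and matches the rebuilt fc layer.
network.avg_pool = nn.AvgPool2d(kernel_size=4, stride=4)

# Option 2 (sketch): keep the default pooling and instead size the fully-
# connected layer for the flattened 2048 * 4 * 4 = 32768 features.
# network.fc = nn.Dense(in_channels=2048 * 4 * 4, out_channels=1000)

With option 1, the in_channel = network.fc.in_channels line from your question keeps working unchanged, because the flattened size is 2048 again.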