Why does the training loss keep increasing?


I'm new to deep learning. I just finished learning about multilayer perceptrons and have only recently started on CNNs. Over the past two days I found a third-party image-classification dataset (I searched for one ahead of time because I will need it for a paper later) and implemented a simple CNN. The code runs, but there is a serious problem: the loss increases from the very start of training, and while the final test accuracy is fairly low, it is not absurdly so. The dataset and code are given below. The dataset is the Lung Nodule Malignancy recognition dataset, a binary-classification dataset containing 6691 samples of size [1, 64, 64]. I split it 8:2 into a training set and a test set. The code is as follows:
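For reference, this is how the file layout can be verified with h5py before training (a minimal sketch; it assumes the same all_patches.hdf5 file and top-level dataset names used in the code below):

import h5py

# Open the file read-only and list each top-level dataset with its shape and dtype
with h5py.File("all_patches.hdf5", "r") as f:
    for name, ds in f.items():
        print(name, ds.shape, ds.dtype)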

# Import the required libraries
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import h5py
from torch.utils.data import Dataset, DataLoader, random_split

# Define hyperparameters
num_epochs = 5
batch_size = 100  # note: the DataLoaders below are created with batch_size=32, so this value is unused
learning_rate = 0.0001

# Use the GPU if one is available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Load the lung nodule dataset and preprocess it
# Define the Dataset class
class HDF5Dataset(Dataset):
    def __init__(self, file_path, transform=None):
        # file_path: path to the hdf5 file
        # transform: optional image transform
        self.file_path = file_path
        self.transform = transform
        # Open the hdf5 file (read-only)
        self.hdf5_file = h5py.File(file_path, "r")
        # Get the image and label dataset objects (change the names to match the actual file)
        self.images = self.hdf5_file["ct_slices"]
        self.labels = self.hdf5_file["new_slice_class"]

    def __len__(self):
        # Return the size of the dataset
        return len(self.images)

    def __getitem__(self, index):
        # Return one image and its label by index
        image = self.images[index] # read the image data from the hdf5 file
        label = self.labels[index] # read the label data from the hdf5 file

        if self.transform:
            image = self.transform(image)

        return image, label
    
# Define the image transforms
transform = transforms.Compose([
    transforms.ToTensor(), # convert the image to a tensor
])
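# (Added note: for a 2D numpy array, ToTensor adds a channel dimension, and it
# only rescales uint8 inputs to [0, 1]; float inputs pass through unchanged.)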

# Create the dataset object
dataset = HDF5Dataset(file_path="all_patches.hdf5", transform=transform)

# Define the sizes of the training and test sets (8:2)
train_size = int(0.8 * len(dataset))
test_size = len(dataset) - train_size

# Randomly split the dataset into a training set and a test set
train_dataset, test_dataset = random_split(dataset, [train_size, test_size])

# Create DataLoaders for the training and test sets
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True, drop_last=True)
test_loader = DataLoader(test_dataset, batch_size=32, drop_last=True)

# Loop over each loader once to sanity-check tensor shapes
print("Training set:")
print(f"Train dataset size: {len(train_dataset)}")
for images, labels in train_loader:
    print(images.shape) 
    print(labels.shape)
    # print(images) 
    # print(labels)
    break
print("------------------------------------------------")

print("Test set:")
print(f"Test dataset size: {len(test_dataset)}")
for images, labels in test_loader:
    print(images.shape) 
    print(labels.shape)
    # print(images) 
    # print(labels)
    break
print("------------------------------------------------")
# Define the network
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        # Convolutional layer: 1 input channel, 16 output channels, kernel size 3, stride 1, padding 1
        self.conv1 = nn.Conv2d(1, 16, 3, 1, 1)
        # ReLU activation
        self.relu1 = nn.ReLU()
        # Max pooling, kernel size 2, stride 2
        self.pool1 = nn.MaxPool2d(2, 2)
        # Convolutional layer: 16 input channels, 32 output channels, kernel size 3, stride 1, padding 1
        self.conv2 = nn.Conv2d(16, 32, 3, 1, 1)
        # ReLU activation
        self.relu2 = nn.ReLU()
        # Max pooling, kernel size 2, stride 2
        self.pool2 = nn.MaxPool2d(2, 2)
        # Fully connected layer: 32*16*16 input features, 1024 output features
        self.fc1 = nn.Linear(32*16*16, 1024)
        # ReLU activation
        self.relu3 = nn.ReLU()
        # Fully connected layer: 1024 input features, 128 output features
        self.fc2 = nn.Linear(1024, 128)
        # ReLU activation
        self.relu4 = nn.ReLU()
        # Fully connected layer: 128 input features, 2 output features
        self.fc3 = nn.Linear(128, 2)
        # Softmax over the class dimension (note: nn.CrossEntropyLoss below already applies log-softmax internally)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        # Forward pass
        x = self.conv1(x)
        x = self.relu1(x)
        x = self.pool1(x)
        x = self.conv2(x)
        x = self.relu2(x)
        x = self.pool2(x)
        x = x.view(-1, 32 * 16 * 16)
        x = self.fc1(x)
        x = self.relu3(x)
        x = self.fc2(x)
        x = self.relu4(x)
        x = self.fc3(x)
        x = self.softmax(x)
        return x
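# Quick shape check (added sketch): one dummy 1x64x64 image should come out
# as a (1, 2) tensor if the view() size in forward() is correct.
_net_check = CNN()
print(_net_check(torch.randn(1, 1, 64, 64)).shape)  # expected: torch.Size([1, 2])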
# Create a CNN instance and move it to the GPU
net = CNN().to(device)

# Create the loss function and the optimizer, specifying the parameters to optimize and the learning rate
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=learning_rate)
# Train the CNN
for epoch in range(num_epochs):
    running_loss = 0.0
    for i, (data, label) in enumerate(train_loader, 0):
        # Move the data and labels to the GPU
        data, label = data.to(device), label.to(device, dtype=torch.int64)

        # Zero the gradients
        optimizer.zero_grad()

        # Forward pass, loss computation, backward pass, and optimizer step
        output = net(data)
        loss = criterion(output, label)
        loss.backward()
        optimizer.step()

        # Accumulate the loss
        running_loss += loss.item()
        if (i+1) % 10 == 0:    # print once every 10 batches
            current_running_loss = running_loss / 10
            print(f"Epoch {epoch + 1} Batch {i+1}, Loss: {current_running_loss:.4f}")

print('Finished Training')
# Evaluate the CNN on the test set
correct = 0
total = 0
with torch.no_grad():
    for data, label in test_loader:
        # Move the inputs and labels to the GPU
        data, label = data.to(device), label.to(device)

        # Forward pass, then compute predictions and accuracy
        outputs = net(data)
        _, predicted = torch.max(outputs.data, 1)
        total += label.size(0)
        correct += (predicted == label).sum().item()
    
    print('Accuracy of the network on the test images: {} %'.format(100 * correct / total))
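A note on the logging in the training loop above: running_loss accumulates from the start of the epoch and is divided by 10 at every print, so each printed value covers all batches seen so far in the epoch. A variant that reports the average over only the most recent 10 batches would reset the accumulator (a sketch of the logging only):

        running_loss += loss.item()
        if (i + 1) % 10 == 0:
            print(f"Epoch {epoch + 1} Batch {i + 1}, Loss: {running_loss / 10:.4f}")
            running_loss = 0.0  # reset so each print covers only the last 10 batches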

The output of the training loop is shown below:

Epoch 1 Batch 10, Loss: 0.6785
Epoch 1 Batch 20, Loss: 1.3699
Epoch 1 Batch 30, Loss: 2.0675
Epoch 1 Batch 40, Loss: 2.7401
Epoch 1 Batch 50, Loss: 3.4534
Epoch 1 Batch 60, Loss: 4.1604
Epoch 1 Batch 70, Loss: 4.8768
Epoch 1 Batch 80, Loss: 5.5619
Epoch 1 Batch 90, Loss: 6.2502
Epoch 1 Batch 100, Loss: 6.9353
Epoch 1 Batch 110, Loss: 7.6517
Epoch 1 Batch 120, Loss: 8.3900
Epoch 1 Batch 130, Loss: 9.0907
Epoch 1 Batch 140, Loss: 9.7634
Epoch 1 Batch 150, Loss: 10.4548
Epoch 1 Batch 160, Loss: 11.1305
Epoch 2 Batch 10, Loss: 0.6695
Epoch 2 Batch 20, Loss: 1.3921
Epoch 2 Batch 30, Loss: 2.0835
Epoch 2 Batch 40, Loss: 2.8093
Epoch 2 Batch 50, Loss: 3.5194
Epoch 2 Batch 60, Loss: 4.2608
Epoch 2 Batch 70, Loss: 4.9428
Epoch 2 Batch 80, Loss: 5.6123
Epoch 2 Batch 90, Loss: 6.3225
Epoch 2 Batch 100, Loss: 6.9889
Epoch 2 Batch 110, Loss: 7.6459
Epoch 2 Batch 120, Loss: 8.3060
Epoch 2 Batch 130, Loss: 8.9630
Epoch 2 Batch 140, Loss: 9.6919
Epoch 2 Batch 150, Loss: 10.3833
Epoch 2 Batch 160, Loss: 11.0528
Epoch 3 Batch 10, Loss: 0.6851
Epoch 3 Batch 20, Loss: 1.3828
Epoch 3 Batch 30, Loss: 2.0960
Epoch 3 Batch 40, Loss: 2.8062
Epoch 3 Batch 50, Loss: 3.5101
Epoch 3 Batch 60, Loss: 4.1358
Epoch 3 Batch 70, Loss: 4.8366
Epoch 3 Batch 80, Loss: 5.5280
Epoch 3 Batch 90, Loss: 6.1600
Epoch 3 Batch 100, Loss: 6.8732
Epoch 3 Batch 110, Loss: 7.6021
Epoch 3 Batch 120, Loss: 8.2810
Epoch 3 Batch 130, Loss: 8.9693
Epoch 3 Batch 140, Loss: 9.6669
Epoch 3 Batch 150, Loss: 10.3552
Epoch 3 Batch 160, Loss: 11.0528
Epoch 4 Batch 10, Loss: 0.6633
Epoch 4 Batch 20, Loss: 1.3921
Epoch 4 Batch 30, Loss: 2.1117
Epoch 4 Batch 40, Loss: 2.8155
Epoch 4 Batch 50, Loss: 3.5226
Epoch 4 Batch 60, Loss: 4.1858
Epoch 4 Batch 70, Loss: 4.8803
Epoch 4 Batch 80, Loss: 5.6092
Epoch 4 Batch 90, Loss: 6.2850
Epoch 4 Batch 100, Loss: 6.9732
Epoch 4 Batch 110, Loss: 7.7209
Epoch 4 Batch 120, Loss: 8.3904
Epoch 4 Batch 130, Loss: 9.0505
Epoch 4 Batch 140, Loss: 9.7357
Epoch 4 Batch 150, Loss: 10.4271
Epoch 4 Batch 160, Loss: 11.0809
Epoch 5 Batch 10, Loss: 0.6758
Epoch 5 Batch 20, Loss: 1.3546
Epoch 5 Batch 30, Loss: 2.0179
Epoch 5 Batch 40, Loss: 2.7468
Epoch 5 Batch 50, Loss: 3.4819
Epoch 5 Batch 60, Loss: 4.1858
Epoch 5 Batch 70, Loss: 4.8866
Epoch 5 Batch 80, Loss: 5.5967
Epoch 5 Batch 90, Loss: 6.2319
Epoch 5 Batch 100, Loss: 6.9045
Epoch 5 Batch 110, Loss: 7.6021
Epoch 5 Batch 120, Loss: 8.2841
Epoch 5 Batch 130, Loss: 8.9474
Epoch 5 Batch 140, Loss: 9.6763
Epoch 5 Batch 150, Loss: 10.3677
Epoch 5 Batch 160, Loss: 11.0528
Finished Training
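(A quick check on the numbers above: within each epoch the printed value grows almost linearly, about 11.13 / 160 ≈ 0.07 per batch, i.e. roughly 0.7 per 10-batch window, and it drops back to about 0.68 at the start of every epoch.)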

I know that one way to approach this is to lower the learning rate, but that did not help. Could anyone more experienced suggest how to diagnose and fix this? Since I am a complete beginner, I have surely made some mistakes that may be very silly. Please don't be too hard on me; any guidance is greatly appreciated.
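For reference, one standard way to lower the learning rate over the course of training is a scheduler; a minimal sketch using PyTorch's StepLR (the step_size and gamma values are illustrative, not a recommendation):

from torch.optim.lr_scheduler import StepLR

scheduler = StepLR(optimizer, step_size=2, gamma=0.1)  # multiply the lr by 0.1 every 2 epochs
for epoch in range(num_epochs):
    ...  # training loop as above
    scheduler.step()  # advance the schedule once per epoch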

deep-learning conv-neural-network image-classification