PyTorch DeiT model predicts one class no matter what

Problem description

We are trying to fine-tune a custom model on top of an imported DeiT distilled patch16 384 pretrained model.

Output:

Cost at epoch 0 is 4.611058227040551
Cost at epoch 1 is 0.9889081553979353
test set accuracy
Checking accuracy
scores: tensor([[ 33.9686,  33.2787, -31.1509,  ..., -25.5279, -36.7728, -24.9331],
        [ 33.9695,  33.2792, -31.1509,  ..., -25.5264, -36.7719, -24.9356],
        [ 33.9690,  33.2784, -31.1496,  ..., -25.5270, -36.7717, -24.9326],
        ...,
        [ 33.9692,  33.2780, -31.1487,  ..., -25.5267, -36.7713, -24.9314],
        [ 33.9654,  33.2793, -31.1575,  ..., -25.5372, -36.7818, -24.9307],
        [ 33.9687,  33.2778, -31.1490,  ..., -25.5278, -36.7719, -24.9300]],
       device='cuda:0')
predictions: tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0], device='cuda:0')
y: tensor([1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0,
        1, 1, 0, 1, 0, 1, 1, 1], device='cuda:0') 

###  many more prints later
predictions: tensor([0, 0, 0, 0, 0, 0], device='cuda:0')
y: tensor([0, 1, 1, 0, 1, 1], device='cuda:0')
Got 80 / 198 with accuracy 40.40
Precision: 0.1632
Recall: 0.4040
F1-Score: 0.2325

The data folder is KneeOsteoarthritisXray, which contains the subfolders train, test and val (ignore val, we just want this to work first). Each of these contains the subfolders 0 and 1 (0 is healthy, 1 is osteoarthritis). The model only ever predicts 0, so the accuracy it reports is simply the proportion of 0s in the dataset.
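For reference, the layout described above looks like this (folder names as used in the code below):

KneeOsteoarthritisXray/
    train/
        0/   # healthy
        1/   # osteoarthritis
    val/     # ignored for now
        0/
        1/
    test/
        0/
        1/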

from sklearn.metrics import precision_score, recall_score, f1_score
import os
import numpy as np
from PIL import Image
from torch.utils.data import Dataset
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.utils.data import DataLoader
#import torchvision.transforms as transforms
from torchvision import transforms
import torchvision
from transformers import DeiTForImageClassificationWithTeacher, DeiTImageProcessor

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.ToTensor(),
])

class myDataset(Dataset):
    def __init__(self, root_dir):
        self.root_dir = root_dir
        self.data = []

        for label in os.listdir(root_dir):
            label_dir = os.path.join(root_dir, label)
            if os.path.isdir(label_dir):
                for file in os.listdir(label_dir):
                    self.data.append((os.path.join(label_dir, file), int(label)))

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        img_path, label = self.data[idx]
        image = Image.open(img_path)
        #print(f'image before normalization: {image}') #DEBUG
        image = transform(image)
        #print(f'image after normalization to 0-1{image}') #DEBUG
        image_np = np.array(image) 
        return image, label

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

learning_rate = 0.01
batch_size = 32
num_epochs = 32

model_path = "/content/drive/MyDrive/datasets/PyTorchdeit-base-distilled-patch16-384/"
model = DeiTForImageClassificationWithTeacher.from_pretrained(model_path)
model.to(device)

train_dataset = myDataset(root_dir="/content/drive/MyDrive/datasets/KneeOsteoarthritisXray/train")
#val_dataset = myDataset(root_dir="/content/drive/MyDrive/datasets/KneeOsteoarthritisXray/val")
test_dataset = myDataset(root_dir="/content/drive/MyDrive/datasets/KneeOsteoarthritisXray/test")

train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
#val_loader = DataLoader(dataset=val_dataset, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=True)


criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)

for epoch in range(num_epochs):
    losses = []

    for batch_idx, (data, targets) in enumerate(train_loader):
        data = data.to(device=device)
        targets = targets.to(device=device)

        scores = model(data)['logits']
        loss = criterion(scores, targets)
        losses.append(loss.item())

        optimizer.zero_grad()
        loss.backward()

        optimizer.step()

    print(f'Cost at epoch {epoch} is {(sum(losses)/len(losses))}')
  

def check_accuracy(loader, model):
    print("Checking accuracy")
    num_correct = 0
    num_samples = 0
    all_labels = []
    all_preds = []

    model.eval()

    with torch.no_grad():
      for x, y in loader:
          x = x.to(device=device)
          y = y.to(device=device)

          scores = model(x)['logits']
          print(f'scores: {scores}')
          _, predictions = scores.max(1)
          print(f'predictions: {predictions}')
          print(f'y: {y}')

          all_labels.extend(y.cpu().numpy())
          all_preds.extend(predictions.cpu().numpy())

          num_correct += (predictions == y).sum() #.item()
          num_samples += predictions.size(0)

    print(f'Got {num_correct} / {num_samples} with accuracy {float(num_correct)/float(num_samples)*100:.2f}')

    precision = precision_score(all_labels, all_preds, average='weighted')
    recall = recall_score(all_labels, all_preds, average='weighted')
    f1 = f1_score(all_labels, all_preds, average='weighted')

    print(f'Precision: {precision:.4f}')
    print(f'Recall: {recall:.4f}')
    print(f'F1-Score: {f1:.4f}')

    model.train()

print('test set accuracy')
check_accuracy(test_loader, model)

We don't think it is overfitting, because we have tried both imbalanced and balanced versions of the dataset, we have tried to overfit a small dataset, and many other things.

We have looked at many similar complaints, but couldn't really take anything away from their code or the proposed solutions.

python machine-learning pytorch dataset vision-transformer
1 Answer

As you said, you are using a DeiT model, and the learning rate you are training it with (0.01) is relatively high for fine-tuning a model like DeiT. Such a high learning rate can make the model converge to a suboptimal solution, which is why it collapses to predicting only one class.
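For example, here is a minimal sketch of the same training loop with a much smaller learning rate, reusing the model, criterion, train_loader, num_epochs and device from your code. The value 5e-5 is just a typical starting point for fine-tuning Transformer models, not something tuned for your data:

import torch.optim as optim

learning_rate = 5e-5  # orders of magnitude smaller than 0.01
optimizer = optim.AdamW(model.parameters(), lr=learning_rate, weight_decay=0.01)

for epoch in range(num_epochs):
    losses = []
    for data, targets in train_loader:
        data = data.to(device)
        targets = targets.to(device)

        scores = model(data)['logits']   # same forward pass as in your code
        loss = criterion(scores, targets)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        losses.append(loss.item())

    print(f'Cost at epoch {epoch} is {sum(losses) / len(losses)}')

If the predictions still stay constant after that, a few warm-up epochs or a learning-rate scheduler can also help, but lowering the base learning rate is the first thing to try.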
