Fine-tuning LLaMA 3.1 with QLoRA - CUDA out of memory error

Problem description

I am trying to fine-tune the LLaMA 3.1 8-billion-parameter model with QLoRA, loading it in 4-bit via the bitsandbytes library and training on a mental health counseling conversations dataset from Hugging Face. However, when I run the code I hit a torch.cuda.OutOfMemoryError. I have tried using multiple GPUs as well as GPUs with more memory, but the error persists.

Here is my code:

import torch
from torch.utils.data import DataLoader
from datasets import load_dataset
from peft import get_peft_model, LoraConfig, TaskType
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    Trainer,
    TrainingArguments,
)

# BitsAndBytes configuration which loads in 4-bit
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.bfloat16,
)

# Load model and tokenizer using huggingface
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

# Load mental health counseling dataset 
dataset = load_dataset("Amod/mental_health_counseling_conversations")

# Data preprocessing functions 
def generate_prompt(Context, Response):
    return f"""
    You are supposed to reply to the questions as a professional therapist

    Question: {Context}
    Answer: {Response}
    """

def format_for_llama(example):
    prompt = generate_prompt(example['Context'], example['Response'])
    return {
        "text": prompt.strip()
    }

formatted_dataset = dataset['train'].map(format_for_llama)

tokenizer.pad_token = tokenizer.eos_token

# Tokenize the formatted text into fixed-length input_ids / attention_mask
# (max_length=512 is an assumed value)
def tokenize_function(example):
    return tokenizer(example["text"], truncation=True, padding="max_length", max_length=512)

tokenized_dataset = formatted_dataset.map(tokenize_function, remove_columns=formatted_dataset.column_names)
tokenized_dataset.set_format(type="torch", columns=["input_ids", "attention_mask"])

# Collate function for DataLoader
def collate_fn(examples):
    input_ids = torch.stack([example['input_ids'] for example in examples])
    attention_mask = torch.stack([example['attention_mask'] for example in examples])
    return {
        'input_ids': input_ids,
        'attention_mask': attention_mask,
        # causal LM training needs labels; using the input ids themselves is the usual choice
        'labels': input_ids.clone()
    }

train_dataloader = DataLoader(tokenized_dataset, collate_fn=collate_fn, batch_size=10)
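# Note: the DataLoader above is not used anywhere below; Trainer builds its own loader from train_dataset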

# PEFT configuration (adding trainable adapters)
peft_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    inference_mode=False,
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=[
        "q_proj",
        "k_proj",
        "v_proj",
        "o_proj",
        "gate_proj",
        "up_proj",
        "down_proj"
    ]
)

model = get_peft_model(model, peft_config)
model.print_trainable_parameters()

# Training arguments hyperparameters
args = TrainingArguments(
    output_dir="./models",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=10,
    num_train_epochs=10,
    weight_decay=0.01,
    logging_dir='logs',
    logging_strategy="epoch",
    remove_unused_columns=False,
    eval_strategy="no",
    load_best_model_at_end=False,
)

# Trainer initialization and training
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_dataset,
    data_collator=collate_fn
)
trainer.train()

When I run this code, I get the following error:

OutOfMemoryError                          Traceback (most recent call last)
Cell In[42], line 7
      1 trainer = Trainer(
      2     model=model,
      3     args=args,
      4     train_dataset=tokenized_dataset,
      5     data_collator=collate_fn
      6 )
----> 7 trainer.train()

File /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1938, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
   1936         hf_hub_utils.enable_progress_bars()
   1937 else:
-> 1938     return inner_training_loop(
   1939         args=args,
   1940         resume_from_checkpoint=resume_from_checkpoint,
   1941         trial=trial,
   1942         ignore_keys_for_eval=ignore_keys_for_eval,
   1943     )

File /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:2279, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
   2276     self.control = self.callback_handler.on_step_begin(args, self.state, self.control)
   2278 with self.accelerator.accumulate(model):
-> 2279     tr_loss_step = self.training_step(model, inputs)
...
   1857 else:
-> 1858     ret = input.softmax(dim, dtype=dtype)
   1859 return ret

OutOfMemoryError: CUDA out of memory. Tried to allocate 1.25 GiB. GPU 0 has a total capacty of 47.54 GiB of which 1.05 GiB is free. Process 3704361 has 46.47 GiB memory in use. Of the allocated memory 45.37 GiB is allocated by PyTorch, and 808.03 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
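
The end of that message suggests tuning the allocator to avoid fragmentation; one minimal way to try that hint (the 128 MiB split size is just an arbitrary example value) is to set the environment variable before any CUDA memory is allocated:

import os

# Allocator hint from the error message; must be set before torch allocates CUDA memory
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"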

I have tried multi-GPU instances on RunPod, as shown in this screenshot, but they lead to the same error.

How can I fix this?

Things I have tried

  1. Using multiple GPUs
  2. Increasing GPU memory
  3. Adjusting the batch size

Environment

  • Python version: 3.10
  • PyTorch version: (not specified)
  • CUDA version: (not specified)
  • GPU: multiple instances on RunPod
nlp artificial-intelligence large-language-model llama fine-tuning
1 Answer

Which type of GPU are you using?

Some things I would try (a combined sketch follows this list):

  • Set bnb_4bit_use_double_quant=True in BitsAndBytesConfig
  • Lower r in LoraConfig to 8 or 4
  • Lower per_device_train_batch_size (and per_device_eval_batch_size), e.g. to 4 or 2
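
Putting those suggestions together, a minimal sketch of the adjusted configuration might look like this. The concrete values (r=8, batch size 2) are just examples picked from the ranges above; everything else is copied from the question:

import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, TaskType

# 4-bit config with nested (double) quantization enabled
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,      # suggestion 1: double quantization
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Smaller LoRA rank
peft_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    inference_mode=False,
    r=8,                                 # suggestion 2: lower r to 8 or 4
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Smaller per-device batch size
args = TrainingArguments(
    output_dir="./models",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=2,       # suggestion 3: e.g. 4 or 2
    per_device_eval_batch_size=2,
    num_train_epochs=10,
    weight_decay=0.01,
    logging_dir="logs",
    logging_strategy="epoch",
    remove_unused_columns=False,
    eval_strategy="no",
    load_best_model_at_end=False,
)

Double quantization also quantizes the quantization constants themselves and shaves a little memory off the stored weights, while the smaller rank and batch size mainly reduce gradient and activation memory; the batch size usually has the largest effect on the activation memory where the softmax in the traceback fails.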