fastai ULMFiT model trained on a CUDA machine, for use on CPU

Problem description

I have an export.pkl model that was trained on a CUDA machine. I want to use it on my MacBook:

from fastai.text import load_learner
from utils import get_corpus

learner = load_learner('./models')
corpus = get_corpus()

res = [ str(learner.predict(c)[0]) for c in corpus ]

I get the following error:

  ...
  File "/Users/gautiergilabert/Envs/cc/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 146, in forward
    "them on device: {}".format(self.src_device_obj, t.device))
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu

I have two questions:

  • I found the code that raises this error for export.pkl:
for t in chain(self.module.parameters(), self.module.buffers()):
    if t.device != self.src_device_obj:
        raise RuntimeError("module must have its parameters and buffers "
                           "on device {} (device_ids[0]) but found one of "
                           "them on device: {}".format(self.src_device_obj, t.device))

About the module in the docstring: "module to be parallelized". I don't really understand what this is. My MacBook?
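For reference, the module in that docstring is the PyTorch nn.Module that nn.DataParallel wraps, i.e. the trained model itself rather than the machine it runs on. A minimal sketch (the nn.Linear model here is purely illustrative):

import torch.nn as nn

model = nn.Linear(10, 2)          # an ordinary nn.Module; its parameters live on the CPU here
wrapped = nn.DataParallel(model)  # model is the "module to be parallelized"

# data_parallel.py keeps the wrapped model as self.module; in forward() it
# checks that all of its parameters and buffers sit on device_ids[0]
# (cuda:0 by default) before scattering the inputs.
print(wrapped.module is model)    # True: .module is the original model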

Apart from my MacBook, I would like to run the model on a CPU:

  • Is there a way to make this export.pkl model work on a CPU?
  • Is there a way to create another export.pkl on the CUDA machine and make it usable on a CPU?

Thanks

gpu cpu fast-ai
1 Answer

One way is to load the learner by creating it with an empty dataset and then loading the model weights afterwards. For a ResNet image classifier, something like this should work:

from fastai.vision import *

# path where the model is saved under path/models/model-name
path = "model_path"

tfms = get_transforms()
data = ImageDataBunch.single_from_classes(path, classes=["class1", "class2"], ds_tfms=tfms)

learner = cnn_learner(data, models.resnet34, metrics=accuracy)
# loads model from model_path/models/model_name.pth
learner.load("model_name")

image = open_image("test.jpg")
pred_class, pred_idx, outputs = learner.predict(image)
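
For the ULMFiT/text model from the question, a simpler route may be to force fastai onto the CPU before calling load_learner and, if the exported model is still wrapped in nn.DataParallel (which the traceback suggests), to unwrap it. A rough sketch, assuming fastai v1 and the ./models/export.pkl layout from the question ("some example text" is just a placeholder):

import torch
from fastai.text import load_learner
from fastai.torch_core import defaults

# Tell fastai v1 to load and run everything on the CPU.
defaults.device = torch.device('cpu')

learner = load_learner('./models')   # reads ./models/export.pkl, as in the question

# If the learner was exported with its model still wrapped in nn.DataParallel,
# unwrap it so inference no longer insists on parameters being on cuda:0.
if isinstance(learner.model, torch.nn.DataParallel):
    learner.model = learner.model.module

print(str(learner.predict("some example text")[0]))

Doing the same unwrapping (learner.model = learner.model.module) on the CUDA machine before calling learner.export() should also produce an export.pkl that loads directly on a CPU-only machine, which would address the second question.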