Exporting a Detectron2 model

Question

I am trying to export a model from the panoptic-deeplab project, which uses detectron2. I want to export it as a .pt file so that I can later load it into LibTorch.
My plan is to use DefaultPredictor to predict the panoptic segmentation of a single image, trace the model, and save it with torch.jit.trace.
I know there is a deployment example in the detectron2 repository, but unfortunately it runs inference with a Mask R-CNN model in TorchScript format, not a panoptic one.
If anyone can tell me what might be wrong with my approach, suggest how I should modify my code, or explain how I should export a model that performs panoptic segmentation, I would appreciate it.
Let me know if more information is needed.
Here is the relevant code I have so far:

import torch
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.data import MetadataCatalog
from detectron2.data import transforms as T
from detectron2.modeling import build_model


class DefaultPredictor:
    def __init__(self, cfg):
        self.cfg = cfg.clone()  # cfg can be modified by the model
        self.model = build_model(self.cfg)
        self.model.eval()
        if len(cfg.DATASETS.TEST):
            self.metadata = MetadataCatalog.get(cfg.DATASETS.TEST[0])

        checkpointer = DetectionCheckpointer(self.model)
        checkpointer.load(cfg.MODEL.WEIGHTS)

        self.aug = T.ResizeShortestEdge(
            [cfg.INPUT.MIN_SIZE_TEST, cfg.INPUT.MIN_SIZE_TEST], cfg.INPUT.MAX_SIZE_TEST
        )

        self.input_format = cfg.INPUT.FORMAT
        assert self.input_format in ["RGB", "BGR"], self.input_format

    def __call__(self, original_image):
        with torch.no_grad():
            # The input image is assumed BGR; convert if the model expects RGB.
            if self.input_format == "RGB":
                original_image = original_image[:, :, ::-1]
            height, width = original_image.shape[:2]
            image = self.aug.get_transform(original_image).apply_image(original_image)
            # HWC uint8 -> CHW float32, plus a manually added batch dimension
            image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1)).unsqueeze(0)
            print(image)
            print(image.shape)
            image.to(self.cfg.MODEL.DEVICE)
            inputs = {"image": image, "height": height, "width": width}

            predictions = self.model([inputs])[0]
            self.model = self.model.to(self.cfg.MODEL.DEVICE)

            traced_model = torch.jit.trace(self.model, image, strict=False)
            torch.jit.save(traced_model, "/home/model.pt")
            return predictions

As the config file I use panoptic_fpn_R_50_inference_acc_test.yaml, which can be found among the quick_schedules configs of the Detectron2 project.
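For context, a minimal sketch of the setup this implies, assuming a standard detectron2 checkout; the yaml path and the image file name are illustrative, not taken from the question:

import cv2
from detectron2.config import get_cfg

cfg = get_cfg()
# Hypothetical path: point this at the quick_schedules yaml in your checkout.
cfg.merge_from_file("configs/quick_schedules/panoptic_fpn_R_50_inference_acc_test.yaml")

predictor = DefaultPredictor(cfg)  # the class defined above
img = cv2.imread("input.jpg")      # BGR, HWC, uint8 -- what __call__ expects
predictions = predictor(img)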
But I get this error:

File "/home/panoptic-deeplab/tools_d2/export_model.py", line 236, in <module>
    main()  # pragma: no cover
File "/home/panoptic-deeplab/tools_d2/export_model.py", line 219, in main
    predictions = predictor(img)
File "/home/.local/lib/python3.10/site-packages/detectron2/engine/defaults.py", line 327, in __call__
    predictions = self.model([inputs])[0]
File "/home/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/home/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "/home/.local/lib/python3.10/site-packages/detectron2/modeling/meta_arch/panoptic_fpn.py", line 115, in forward
    return self.inference(batched_inputs)
File "/home/.local/lib/python3.10/site-packages/detectron2/modeling/meta_arch/panoptic_fpn.py", line 154, in inference
    features = self.backbone(images.tensor)
File "/home/anamudura/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/home/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "/home/.local/lib/python3.10/site-packages/detectron2/modeling/backbone/fpn.py", line 139, in forward
    bottom_up_features = self.bottom_up(x)
File "/home/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
File "/home/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
File "/home/.local/lib/python3.10/site-packages/detectron2/modeling/backbone/resnet.py", line 443, in forward
    assert x.dim() == 4, f"ResNet takes an input of shape (N, C, H, W). Got {x.shape} instead!"
AssertionError: ResNet takes an input of shape (N, C, H, W). Got torch.Size([1, 1, 3, 800, 1280]) instead!

Tags: pytorch, export, jit, detectron
1 Answer

You are hitting this error while tracing a panoptic segmentation model with Detectron2. The message says that the tensor reaching the ResNet backbone is not in the expected format: ResNet expects an input of shape (N, C, H, W), where N is the batch size, C the number of channels (3 for RGB), H the height, and W the width. Your input instead arrives with shape [1, 1, 3, 800, 1280], which has one dimension too many.

The extra dimension comes from the unsqueeze(0) in __call__: Detectron2 meta-architectures expect each entry of batched_inputs to carry an unbatched (C, H, W) tensor under the "image" key and add the batch dimension themselves when they stack the images, so the manually batched (1, 3, 800, 1280) tensor becomes (1, 1, 3, 800, 1280) inside the model.
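A minimal sketch of the fix, assuming you keep the DefaultPredictor from the question: drop the manual unsqueeze(0) so the "image" entry stays (C, H, W), assign the result of .to() (it is not in-place), and trace through detectron2's TracingAdapter rather than calling torch.jit.trace on the raw model, since the model's forward takes a list of dicts, not a plain tensor:

from detectron2.export import TracingAdapter

# Inside __call__, replacing the lines after apply_image:
image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1))  # (C, H, W), no unsqueeze
image = image.to(self.cfg.MODEL.DEVICE)  # .to() returns a new tensor; keep the assignment
inputs = {"image": image, "height": height, "width": width}

predictions = self.model([inputs])[0]

# TracingAdapter flattens the dict-based inputs/outputs into tuples of
# tensors so the module can be traced and later loaded from LibTorch.
adapter = TracingAdapter(self.model, [inputs])
traced_model = torch.jit.trace(adapter, adapter.flattened_inputs)
torch.jit.save(traced_model, "/home/model.pt")  # output path taken from the question

Note that PanopticFPN's inference output includes non-tensor data (e.g. segments_info); if tracing fails on the outputs, pass an inference_func to TracingAdapter that returns only the tensors you need, the way the detectron2 deployment example does for Mask R-CNN.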
