I have trained a YOLOv9 model on a custom dataset for instance segmentation, and now I want to extract the segmented regions after segmentation.
Something like the output shown in the image below, but for every object segmented in the image.
from pathlib import Path

import numpy as np
import torch
import cv2

model = torch.hub.load('.', 'custom', path='yolov9-inst/runs/train-seg/gelan-c-seg15/weights/best.pt', source='local')

# Image
img = 'WALL-INSTANCEE-2/test/images/5a243513a69b150001f56c31_emptyroom6_jpeg_jpg.rf.7aa8f6a9aefbb1c76adc60a7b392dcd6.jpg'

# Inference
res = model(img)

# Iterate detection results (helpful for multiple images)
for r in res:
    img = np.copy(r.orig_img)
    img_name = Path(r.path).stem  # source image base-name

    # Iterate each object contour (multiple detections)
    for ci, c in enumerate(r):
        # Get detection class name
        label = c.names[c.boxes.cls.tolist().pop()]

        # Create binary mask
        b_mask = np.zeros(img.shape[:2], np.uint8)

        # Extract contour result
        contour = c.masks.xy.pop()
        # Changing the type
        contour = contour.astype(np.int32)
        # Reshaping
        contour = contour.reshape(-1, 1, 2)

        # Draw contour onto mask
        _ = cv2.drawContours(b_mask, [contour], -1, (255, 255, 255), cv2.FILLED)
But when I run this, I get the following error.
YOLO 🚀 v0.1-104-g5b1ea9a Python-3.10.12 torch-2.1.0+cu118 CUDA:0 (NVIDIA RTX A5000, 24248MiB)
Fusing layers...
gelan-c-seg-custom summary: 414 layers, 27364441 parameters, 0 gradients, 144.2 GFLOPs
WARNING ⚠️ YOLO SegmentationModel is not yet AutoShape compatible. You will not be able to run inference with this model.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[84], line 6
4 img = 'WALL-INSTANCEE-2/test/images/5a243513a69b150001f56c31_emptyroom6_jpeg_jpg.rf.7aa8f6a9aefbb1c76adc60a7b392dcd6.jpg'
5 # Inference
----> 6 results = model(img)
File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
1522 # If we don't have any hooks, we want to skip the rest of the logic in
1523 # this function, and just call forward.
1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1525 or _global_backward_pre_hooks or _global_backward_hooks
1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
1529 try:
1530 result = None
File /workspace/yolov9-inst/./models/common.py:868, in DetectMultiBackend.forward(self, im, augment, visualize)
866 def forward(self, im, augment=False, visualize=False):
867 # YOLO MultiBackend inference
--> 868 b, ch, h, w = im.shape # batch, channel, height, width
869 if self.fp16 and im.dtype != torch.float16:
870 im = im.half() # to FP16
AttributeError: 'str' object has no attribute 'shape'
Could someone please help me resolve this issue?
The model expects the image as a NumPy array or a tensor, but you are passing the file path directly as a string. As the warning above states, AutoShape is not available for this SegmentationModel, so the model will not load and preprocess the path for you: its forward expects an already-preprocessed image tensor of shape (batch, channels, height, width), which is why `im.shape` fails on a `str`. You need to read and preprocess the image yourself before passing it to the model.
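A minimal sketch of that preprocessing, assuming a 640x640 input size and that `model` is the object loaded via torch.hub above. The exact letterboxing, normalization, and post-processing depend on the repo's own segmentation inference pipeline, so treat this only as the preprocessing step, not a full replacement for it:

import cv2
import numpy as np
import torch

img_path = 'WALL-INSTANCEE-2/test/images/5a243513a69b150001f56c31_emptyroom6_jpeg_jpg.rf.7aa8f6a9aefbb1c76adc60a7b392dcd6.jpg'

im0 = cv2.imread(img_path)                                # BGR, HWC, uint8
im = cv2.resize(im0, (640, 640))                          # plain resize (the repo normally letterboxes instead)
im = im[:, :, ::-1].transpose(2, 0, 1)                    # BGR -> RGB, HWC -> CHW
im = np.ascontiguousarray(im, dtype=np.float32) / 255.0   # 0-255 -> 0.0-1.0
im = torch.from_numpy(im).unsqueeze(0)                    # add batch dim -> (1, 3, 640, 640)
im = im.to(next(model.parameters()).device)               # move to the model's device

with torch.no_grad():
    pred = model(im)                                      # raw predictions

Note that `pred` here is the raw network output, not an Ultralytics Results object, so the loop over `r.orig_img` / `c.masks.xy` in your snippet will not work as written with it; the detections still need non-max suppression and mask decoding, which the repo's own segmentation prediction script (e.g. segment/predict.py in this style of codebase) performs.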