I am using Ultralytics YOLOv8 and I ran into this problem.


This is the code I am trying to run, but it throws an error:

from ultralytics import YOLO
model = YOLO(r"C:\Users\skahi\Downloads\runs-20240406T064851Z-001\runs\detect\train\weights\best.pt")
results = model(r"D:\datasets\A_Z Handwritten Data\testimg1.png", save=True)

The output should be the detections for the given image.

But instead I get this error:

NotImplementedError                       Traceback (most recent call last)
Cell In[10], line 1
----> 1 model(r"D:\datasets\A_Z Handwritten Data\testimg1.png", save=True)

File ~\anaconda3\Lib\site-packages\ultralytics\engine\model.py:169, in Model.__call__(self, source, stream, **kwargs)
    146 def __call__(
    147     self,
    148     source: Union[str, Path, int, list, tuple, np.ndarray, torch.Tensor] = None,
    149     stream: bool = False,
    150     **kwargs,
    151 ) -> list:
    152     """
    153     An alias for the predict method, enabling the model instance to be callable.
    154 
   (...)
    167         (List[ultralytics.engine.results.Results]): A list of prediction results, encapsulated in the Results class.
    168     """
--> 169     return self.predict(source, stream, **kwargs)

File ~\anaconda3\Lib\site-packages\ultralytics\engine\model.py:439, in Model.predict(self, source, stream, predictor, **kwargs)
    437 if prompts and hasattr(self.predictor, "set_prompts"):  # for SAM-type models
    438     self.predictor.set_prompts(prompts)
--> 439 return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)

File ~\anaconda3\Lib\site-packages\ultralytics\engine\predictor.py:168, in BasePredictor.__call__(self, source, model, stream, *args, **kwargs)
    166     return self.stream_inference(source, model, *args, **kwargs)
    167 else:
--> 168     return list(self.stream_inference(source, model, *args, **kwargs))

File ~\anaconda3\Lib\site-packages\torch\utils\_contextlib.py:35, in _wrap_generator.<locals>.generator_context(*args, **kwargs)
     32 try:
     33     # Issuing `None` to a generator fires it up
     34     with ctx_factory():
---> 35         response = gen.send(None)
     37     while True:
     38         try:
     39             # Forward the response to our caller and get its next request

File ~\anaconda3\Lib\site-packages\ultralytics\engine\predictor.py:255, in BasePredictor.stream_inference(self, source, model, *args, **kwargs)
    253 # Postprocess
    254 with profilers[2]:
--> 255     self.results = self.postprocess(preds, im, im0s)
    256 self.run_callbacks("on_predict_postprocess_end")
    258 # Visualize, save, write results

File ~\anaconda3\Lib\site-packages\ultralytics\models\yolo\detect\predict.py:25, in DetectionPredictor.postprocess(self, preds, img, orig_imgs)
     23 def postprocess(self, preds, img, orig_imgs):
     24     """Post-processes predictions and returns a list of Results objects."""
---> 25     preds = ops.non_max_suppression(
     26         preds,
     27         self.args.conf,
     28         self.args.iou,
     29         agnostic=self.args.agnostic_nms,
     30         max_det=self.args.max_det,
     31         classes=self.args.classes,
     32     )
     34     if not isinstance(orig_imgs, list):  # input images are a torch.Tensor, not a list
     35         orig_imgs = ops.convert_torch2numpy_batch(orig_imgs)

File ~\anaconda3\Lib\site-packages\ultralytics\utils\ops.py:282, in non_max_suppression(prediction, conf_thres, iou_thres, classes, agnostic, multi_label, labels, max_det, nc, max_time_img, max_nms, max_wh, in_place, rotated)
    280 else:
    281     boxes = x[:, :4] + c  # boxes (offset by class)
--> 282     i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
    283 i = i[:max_det]  # limit detections
    285 # # Experimental
    286 # merge = False  # use merge-NMS
    287 # if merge and (1 < n < 3E3):  # Merge NMS (boxes merged using weighted mean)
   (...)
    294 #     if redundant:
    295 #         i = i[iou.sum(1) > 1]  # require redundancy

File ~\anaconda3\Lib\site-packages\torchvision\ops\boxes.py:41, in nms(boxes, scores, iou_threshold)
     39     _log_api_usage_once(nms)
     40 _assert_has_ops()
---> 41 return torch.ops.torchvision.nms(boxes, scores, iou_threshold)

File ~\anaconda3\Lib\site-packages\torch\_ops.py:755, in OpOverloadPacket.__call__(self, *args, **kwargs)
    750 def __call__(self, *args, **kwargs):
    751     # overloading __call__ to ensure torch.ops.foo.bar()
    752     # is still callable from JIT
    753     # We save the function ptr as the `op` attribute on
    754     # OpOverloadPacket to access it here.
--> 755     return self._op(*args, **(kwargs or {}))

NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, Meta, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].

CPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\cpu\nms_kernel.cpp:112 [kernel]
Meta: registered at /dev/null:440 [kernel]
QuantizedCPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\quantized\cpu\qnms_kernel.cpp:124 [kernel]
BackendSelect: fallthrough registered at ..\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:154 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:498 [backend fallback]
Functionalize: registered at ..\aten\src\ATen\FunctionalizeFallbackKernel.cpp:324 [backend fallback]
Named: registered at ..\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at ..\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at ..\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at ..\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:86 [backend fallback]
AutogradOther: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:53 [backend fallback]
AutogradCPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:57 [backend fallback]
AutogradCUDA: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:65 [backend fallback]
AutogradXLA: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:69 [backend fallback]
AutogradMPS: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:77 [backend fallback]
AutogradXPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:61 [backend fallback]
AutogradHPU: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:90 [backend fallback]
AutogradLazy: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:73 [backend fallback]
AutogradMeta: registered at ..\aten\src\ATen\core\VariableFallbackKernel.cpp:81 [backend fallback]
Tracer: registered at ..\torch\csrc\autograd\TraceTypeManual.cpp:297 [backend fallback]
AutocastCPU: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autocast\nms_kernel.cpp:34 [kernel]
AutocastCUDA: registered at C:\actions-runner\_work\vision\vision\pytorch\vision\torchvision\csrc\ops\autocast\nms_kernel.cpp:27 [kernel]
FuncTorchBatched: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:720 [backend fallback]
BatchedNestedTensor: registered at ..\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:746 [backend fallback]
FuncTorchVmapMode: fallthrough registered at ..\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at ..\aten\src\ATen\LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at ..\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at ..\aten\src\ATen\functorch\TensorWrapper.cpp:203 [backend fallback]
PythonTLSSnapshot: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:162 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at ..\aten\src\ATen\functorch\DynamicLayer.cpp:494 [backend fallback]
PreDispatch: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:166 [backend fallback]
PythonDispatcher: registered at ..\aten\src\ATen\core\PythonFallbackKernel.cpp:158 [backend fallback]

yolov8

1 Answer

As @hanna_liavoshka mentioned,

NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend

indicates that torchvision's NMS (non-maximum suppression) operation is not built with CUDA support in your environment. This typically happens when a CUDA-enabled torch is installed alongside a CPU-only torchvision build. Make sure the torchvision in your environment is a CUDA-enabled build that matches your torch version.
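A quick way to check is to compare the installed versions and CUDA availability. The sketch below is a small diagnostic helper (the function name `cuda_report` is my own, not part of any library); it tolerates missing packages so it runs anywhere:

```python
def cuda_report():
    """Collect torch/torchvision versions and CUDA availability.

    Returns a dict; a value of None means the package is not installed.
    A mismatch such as torch built for CUDA but torchvision built
    CPU-only is the usual cause of the 'torchvision::nms' error.
    """
    report = {}
    try:
        import torch
        report["torch"] = torch.__version__          # e.g. '2.2.0+cu121'
        report["cuda_available"] = torch.cuda.is_available()
    except ImportError:
        report["torch"] = None
        report["cuda_available"] = False
    try:
        import torchvision
        report["torchvision"] = torchvision.__version__  # '+cpu' suffix means no CUDA kernels
    except ImportError:
        report["torchvision"] = None
    return report


if __name__ == "__main__":
    for key, value in cuda_report().items():
        print(f"{key}: {value}")
```

If the torchvision version string ends in `+cpu` while torch reports a `+cuXXX` build, reinstalling both from the same CUDA wheel index (for example `pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121`, with the `cuXXX` tag matching your driver) should fix it. As a temporary workaround, you can also run inference on the CPU by passing `device="cpu"` to the model call.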
