I am trying to make my GPU available both in my WSL environment and in my Docker containers. I have followed the Microsoft/NVIDIA guides, but it doesn't seem to work. There are no obvious errors, but my GPU is not detected either.
When I try to run the CUDA nbody simulation, I get the output below.

Command:
docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Output:
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
-fullscreen (run n-body simulation in fullscreen mode)
-fp64 (use double precision floating point values for simulation)
-hostmem (stores simulation data in host memory)
-benchmark (run benchmark to measure performance)
-numbodies=<N> (number of bodies (>= 1) to run in simulation)
-device=<d> (where d=0,1,2.... for the CUDA device to use)
-numdevices=<i> (where i=(number of CUDA devices > 0) to use for simulation)
-compare (compares simulation results running once on the default GPU and once on the CPU)
-cpu (run n-body simulation on the CPU)
-tipsy=<file.bin> (load a tipsy model file for simulation)
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
Error: only 0 Devices available, 1 requested. Exiting.
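To narrow down where the GPU disappears, it helps to check each layer separately: first whether WSL itself sees the GPU, then whether Docker does. A quick sanity-check sketch (assuming the Windows NVIDIA driver is installed; the CUDA image tag is only an example):

```shell
# 1. Inside the WSL distro: the Windows driver exposes nvidia-smi to WSL.
#    If this fails, the problem is the Windows-side driver, not Docker.
nvidia-smi

# 2. Through Docker: if nvidia-smi works in WSL but fails here, the problem
#    is the Docker GPU integration (Docker Desktop / NVIDIA Container Toolkit).
docker run --rm --gpus=all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

In my case, step 1 worked but step 2 did not, which pointed at the Docker side.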
Here is the output of `sudo lshw -C display`:
*-display:0
description: 3D controller
product: Microsoft Corporation
vendor: Microsoft Corporation
physical id: 3
bus info: pci@6a6c:00:00.0
version: 00
width: 32 bits
clock: 33MHz
capabilities: bus_master cap_list
configuration: driver=dxgkrnl latency=0
resources: irq:0
*-display:1
description: 3D controller
product: Microsoft Corporation
vendor: Microsoft Corporation
physical id: 4
bus info: pci@8741:00:00.0
version: 00
width: 32 bits
clock: 33MHz
capabilities: bus_master cap_list
configuration: driver=dxgkrnl latency=0
resources: irq:0
My goal is to be able to run ollama through Docker using my GPU.
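For reference, this is how I intend to start the ollama container once GPU passthrough works (a sketch based on the ollama Docker instructions; the volume name, port, and model name are defaults/examples, adjust as needed):

```shell
# Start ollama with all GPUs visible; model data persisted in a named volume.
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Then run a model interactively inside the container (model name is an example):
docker exec -it ollama ollama run llama3
```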
After finding this issue https://github.com/microsoft/WSL/issues/10253 I tried upgrading Docker Desktop to the latest version. I was originally on 4.24 and upgraded to 4.34.2.

This upgrade resolved the problem I was having.
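After upgrading, re-running the original benchmark confirms the fix: instead of "Error: only 0 Devices available, 1 requested.", the sample should now detect and use the GPU.

```shell
# Same command as before; after the Docker Desktop upgrade it should
# report the detected CUDA device and run the benchmark.
docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
```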