I'm currently working on a machine learning project on Ubuntu 23 with an RTX 3060 Ti GPU. I have CUDA 11.8 installed, and PyTorch runs perfectly, detecting and using the GPU without any issues.
However, I'm having trouble with TensorFlow. Even though I installed the `tensorflow[and-cuda]` package, TensorFlow does not seem to detect the GPU. I have checked and confirmed that the CUDA toolkit and cuDNN are installed correctly:
2023-11-19 10:04:21.667041: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-11-19 10:04:22.398453: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-11-19 10:04:23.228798: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:995] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-11-19 10:04:23.257218: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1960] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
When I try:
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
it only returns:
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 5180605997013099642
xla_global_id: -1
]
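As a quick cross-check (a sketch assuming TensorFlow 2.x from the `tensorflow[and-cuda]` package), the newer `tf.config` API can list visible GPUs, and `tf.sysconfig.get_build_info()` reports which CUDA and cuDNN versions the installed wheel was built against, so they can be compared with what is actually on the system:

```python
# Diagnostic sketch: list GPUs TensorFlow can see and print the CUDA/cuDNN
# versions the installed TensorFlow wheel was compiled against.
try:
    import tensorflow as tf

    gpus = tf.config.list_physical_devices("GPU")
    print("GPUs visible to TensorFlow:", gpus)

    build = tf.sysconfig.get_build_info()
    print("Built against CUDA:", build.get("cuda_version"))
    print("Built against cuDNN:", build.get("cudnn_version"))
except ImportError:
    # TensorFlow not installed in this environment.
    gpus = None
    print("TensorFlow is not installed in this environment.")
```

If the reported build versions differ from the CUDA 11.8 / cuDNN combination installed on the machine, that mismatch alone is enough to produce the "Cannot dlopen some GPU libraries" warning above.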
Relevant details:
PyTorch runs smoothly with GPU support, but TensorFlow refuses to acknowledge that the GPU exists.
I followed the usual installation steps, including installing the CUDA toolkit, cuDNN, and the GPU-enabled TensorFlow package. Even so, when I run TensorFlow code, it falls back to CPU execution.
Any insight or guidance on resolving this would be greatly appreciated. If there is a specific step or configuration I might be missing for running TensorFlow on Ubuntu 23 with an RTX 3060 Ti, please let me know.
Thanks in advance for your help!
Visit https://www.tensorflow.org/install/source and scroll down to the GPU section; the table there lists which CUDA and cuDNN versions each TensorFlow release was built and tested against.
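Since `tensorflow[and-cuda]` pulls CUDA and cuDNN in as pip wheels rather than using the system install, it is also worth checking which of those NVIDIA packages actually landed in the environment (a sketch; the package names below are the NVIDIA pip wheels for CUDA 11 and 12, and some will legitimately be absent depending on which TensorFlow version was installed):

```python
# Sketch: report which NVIDIA CUDA/cuDNN pip wheels are present in this
# environment, to compare against the versions in the TensorFlow table.
from importlib import metadata

results = {}
for pkg in ("nvidia-cudnn-cu11", "nvidia-cudnn-cu12",
            "nvidia-cuda-runtime-cu11", "nvidia-cuda-runtime-cu12"):
    try:
        results[pkg] = metadata.version(pkg)
    except metadata.PackageNotFoundError:
        results[pkg] = None

for pkg, ver in results.items():
    print(pkg, ver or "not installed")
```

If the cuDNN wheel's major version does not match the one listed for your TensorFlow release in the compatibility table, reinstalling a TensorFlow version that matches your CUDA 11.8 setup is the usual fix.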