TensorFlow throws an exception when using the GPU

Problem description (votes: 0, answers: 1)

I'm trying to speed up a model I built with Keras. After some difficulties with CUDA library versions, I managed to get TensorFlow to detect my GPU. Now, however, when I run the model with the GPU detected, it fails with the following traceback:

2021-01-20 17:40:26.549946: W tensorflow/core/common_runtime/bfc_allocator.cc:441] ****___*********____________________________________________________________________________________
Traceback (most recent call last):
  File "model.py", line 72, in <module>
    history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=2, validation_data=(x_val, y_val))
  File "/home/muke/.local/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1100, in fit
    tmp_logs = self.train_function(iterator)
  File "/home/muke/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
    result = self._call(*args, **kwds)
  File "/home/muke/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 888, in _call
    return self._stateless_fn(*args, **kwds)
  File "/home/muke/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2942, in __call__
    return graph_function._call_flat(
  File "/home/muke/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 1918, in _call_flat
    return self._build_call_outputs(self._inference_function.call(
  File "/home/muke/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 555, in call
    outputs = execute.execute(
  File "/home/muke/.local/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.ResourceExhaustedError:  SameWorkerRecvDone unable to allocate output tensor. Key: /job:localhost/replica:0/task:0/device:CPU:0;ccc21c10a2feabe0;/job:localhost/replica:0/task:0/device:GPU:0;edge_17_IteratorGetNext;0:0
     [[{{node IteratorGetNext/_2}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
 [Op:__inference_train_function_875]

Function call stack:
train_function

The model runs fine on the CPU alone.
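For reference, a CPU-only run can be forced by hiding the GPU before any op is placed on a device; this is a minimal sketch using the standard tf.config API, not necessarily how I ran it:

import tensorflow as tf

# Hide all GPUs before anything touches the device; TensorFlow then
# places every op on the CPU, which is the configuration that works.
tf.config.set_visible_devices([], 'GPU')
print(tf.config.get_visible_devices())  # should list only the CPU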

I'm not sure whether this is a versioning problem, but to be safe I'll describe the setup. I'm running Gentoo, but because the tensorflow package takes a long time to compile I installed a binary package through pip, at version 2.4.0. I installed the latest nvidia-cuda-toolkit package, along with cudnn, through the distribution's package manager, but when I then tested whether TensorFlow detected my GPU, it said it could not find libcusolver.so.10; I have the newer libcusolver.so.11 instead. I tried downgrading to a CUDA toolkit version that still ships libcusolver.so.10, but then TensorFlow complained about several other version 11 libraries it could not find, so I installed the latest CUDA toolkit package again and also put the old libcusolver.so.10 file into /opt/cuda/lib64. I know this is a hacky solution, but if that is the file it is looking for, I'm not sure what else I can do.
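The detection test mentioned above is essentially the following (a minimal sketch; tf.config.list_physical_devices is the standard TF 2.x call):

import tensorflow as tf

# An empty list here means the CUDA libraries (libcusolver and friends)
# did not load and TensorFlow will fall back to CPU only.
print(tf.config.list_physical_devices('GPU'))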

Here is my full model code using Keras:

from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

# input_shape, num_classes and the x_train/y_train/x_val/y_val arrays are defined earlier.
model = Sequential()
model.add(Conv2D(8, (7,7), activation='relu', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))

model.add(Conv2D(16, (7,7), activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))

model.add(Flatten())

model.add(Dense(64, activation='relu'))
model.add(Dropout(0.25))

model.add(Dense(num_classes, activation='softmax'))

model.summary()

batch_size = 1000
epochs = 100

model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(learning_rate=0.001), metrics=['accuracy'])

history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=2, validation_data=(x_val, y_val))

python tensorflow keras
1 Answer

0 votes

This looks like an out-of-memory problem. I ran into something similar before and resolved it by reducing the batch size.
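A minimal sketch of that fix against the code in the question: shrink batch_size and, optionally, let the GPU allocator grow on demand instead of reserving memory up front (the memory-growth call is an extra suggestion, not part of the original answer):

import tensorflow as tf

# Optional: grow GPU memory on demand rather than pre-allocating it all;
# must run before the GPUs are initialized.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)

# The main fix: a smaller batch needs less device memory per step.
batch_size = 100  # was 1000 in the question

history = model.fit(x_train, y_train, batch_size=batch_size,
                    epochs=epochs, verbose=2,
                    validation_data=(x_val, y_val))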
