How do I convert the model so that it can be used on CPU only?

The model is converted like this:

```python
# Imports assumed from context; the original post shows only the conversion body.
# create_model, load_model, get_nodes_from_model, generate_config, num_classes,
# precision and model_save_path are the asker's own helpers/variables.
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.python.compiler.tensorrt import trt_convert as trt

with tf.Graph().as_default():
    with tf.Session() as sess:
        graph = sess.graph
        K.set_session(sess)
        K.set_learning_phase(0)
        inference_model = create_model(num_classes=num_classes)
        load_model()
        # Find output nodes
        outputs, output_node_list = get_nodes_from_model(inference_model.outputs)
        # Find input nodes
        inputs, input_node_list = get_nodes_from_model(inference_model.inputs)
        generate_config()
        with sess.as_default():
            freeze_var_names = list(set(v.op.name for v in tf.global_variables()))
            output_names = output_node_list or []
            output_names += [v.op.name for v in tf.global_variables()]
            input_graph_def = graph.as_graph_def()
            # Clear device assignments so the graph is not pinned to a device
            for node in input_graph_def.node:
                node.device = ""
            frozen_graph = tf.compat.v1.graph_util.convert_variables_to_constants(
                sess, input_graph_def, output_names, freeze_var_names)
            trt_graph = trt.create_inference_graph(
                # frozen model
                input_graph_def=frozen_graph,
                outputs=output_node_list,
                # specify the max workspace
                max_workspace_size_bytes=500000000,
                # precision, can be "FP32" (32-bit floating point) or "FP16"
                precision_mode=precision,
                is_dynamic_op=True)
            # Finally, serialize and dump the output graph to the filesystem
            with tf.gfile.GFile(model_save_path, 'wb') as f:
                f.write(trt_graph.SerializeToString())
            print("TensorRT model is successfully stored!\n")
```
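Note that the graph returned by trt.create_inference_graph contains TRTEngineOp nodes, which need a GPU at runtime, so such a file is not expected to load on a CPU-only machine. For a CPU-only deployment, one option is to skip the TRT step and serialize the plain frozen graph instead. A minimal sketch, reusing frozen_graph and model_save_path from the snippet above:

```python
# Sketch: store the plain frozen graph (no TRTEngineOp nodes) for CPU-only use.
# Reuses `frozen_graph` and `model_save_path` from the conversion code above.
with tf.io.gfile.GFile(model_save_path, 'wb') as f:
    f.write(frozen_graph.SerializeToString())
```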

is_dynamic_op=True got the conversion to complete (it now reports that the model was stored successfully), but I still cannot load it into the Docker TensorRT server.
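The conversion script calls a user-defined generate_config(); the TensorRT Inference Server will not load a model whose repository entry lacks a matching config.pbtxt. A hypothetical sketch of such a helper for a frozen TensorFlow graph, with placeholder model/tensor names, shapes and types that would have to match the real graph:

```python
# Hypothetical sketch of generate_config() for the TensorRT Inference Server
# model repository. "mymodel", the tensor names, dims and types below are
# placeholders, not values from the original post.
def generate_config(config_path="model_repository/mymodel/config.pbtxt"):
    config = '''name: "mymodel"
platform: "tensorflow_graphdef"
max_batch_size: 8
input [
  {
    name: "input_1"
    data_type: TYPE_FP32
    dims: [ 224, 224, 3 ]
  }
]
output [
  {
    name: "predictions/Softmax"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
instance_group [ { kind: KIND_CPU } ]
'''
    with open(config_path, "w") as f:
        f.write(config)
```

An instance_group kind of KIND_CPU asks the server to run the model on CPU, which only works if the graph itself contains no GPU-only ops such as TRTEngineOp.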

I am using the nvcr.io/nvidia/tensorflow:19.10-py3 container to convert the model, and the nvcr.io/nvidia/tensorrtserver:19.10-py3 container for the TensorRT server.

I have an .h5 model (built for GPU?) that I want to run on CPU. I converted the model with Python, and the conversion appears to have worked, but when I run it in the Docker TensorRT server I get an error: ...
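One way to check whether the stored graph is actually runnable without a GPU is to load it back in the TensorFlow container and execute it in a session with GPUs hidden; a graph that still contains TRTEngineOp nodes will typically fail at this point as well. A sketch, with placeholder tensor names that must match the frozen graph:

```python
import numpy as np
import tensorflow as tf

# Sketch: reload the serialized GraphDef and run it with GPUs hidden, to
# verify CPU-only execution. 'input_1:0' and 'predictions/Softmax:0' are
# placeholder tensor names, not taken from the original post.
with tf.io.gfile.GFile(model_save_path, 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as g:
    tf.import_graph_def(graph_def, name='')
    config = tf.compat.v1.ConfigProto(device_count={'GPU': 0})  # hide all GPUs
    with tf.compat.v1.Session(graph=g, config=config) as sess:
        out = sess.run('predictions/Softmax:0',
                       feed_dict={'input_1:0': np.zeros((1, 224, 224, 3), np.float32)})
        print(out.shape)
```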

tensorflow cpu tensorrt