Running inference with the Estimator interface on a pretrained TensorFlow object detection model

Problem description

I'm trying to load a pretrained TensorFlow object detection model from the Tensorflow Object Detection repo as a tf.estimator.Estimator and use it to run inference.

I'm able to load the model and run inference with Estimator.predict(), but the output is garbage. Other ways of loading the model, e.g. as a Predictor, work fine for inference.

Any help with correctly loading the model as an Estimator and calling predict() would be much appreciated. My current code:

Loading and preparing the image

import numpy as np
import requests
import tensorflow as tf
from io import BytesIO
from PIL import Image

def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(list(image.getdata())).reshape((im_height, im_width, 3)).astype(np.uint8)

image_url = 'https://i.imgur.com/rRHusZq.jpg'

# Load image
response = requests.get(image_url)
image = Image.open(BytesIO(response.content))

# Format original image size
im_size_orig = np.array(list(image.size) + [1])
im_size_orig = np.expand_dims(im_size_orig, axis=0)
im_size_orig = np.int32(im_size_orig)

# Resize image
image = image.resize((np.array(image.size) / 4).astype(int))

# Format image
image_np = load_image_into_numpy_array(image)
image_np_expanded = np.expand_dims(image_np, axis=0)
image_np_expanded = np.float32(image_np_expanded)

# Stick into feature dict
x = {'image': image_np_expanded, 'true_image_shape': im_size_orig}

# Stick into input function
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
    x=x,
    y=None,
    shuffle=False,
    batch_size=128,
    queue_capacity=1000,
    num_epochs=1,
    num_threads=1,
)

Side note:

train_and_eval_dict also seems to contain an input_fn for prediction:

train_and_eval_dict['predict_input_fn']

However, this actually returns a tf.estimator.export.ServingInputReceiver, which I don't know what to do with. This could be the root of my problem, since there's a fair amount of preprocessing involved before the model actually sees the image.
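For reference, a function that returns a ServingInputReceiver is exactly what Estimator.export_savedmodel() expects as its serving_input_receiver_fn, so one way to use it (a sketch, not verified here) would be to export a SavedModel, which bakes in that preprocessing, and then load the export through a Predictor:

# Sketch: export a SavedModel using the serving input function that
# create_estimator_and_inputs() already built (it handles preprocessing).
serving_input_fn = train_and_eval_dict['predict_input_fn']
export_dir = estimator.export_savedmodel('./exported_model', serving_input_fn)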

Loading the model as an Estimator

The model is downloaded from the TF Model Zoo here, and the code that loads the model comes from here.

import os
from object_detection import model_hparams  # from the TF Object Detection API
from object_detection import model_lib

model_dir = './pretrained_models/tensorflow/ssd_mobilenet_v1_coco_2018_01_28/'
pipeline_config_path = os.path.join(model_dir, 'pipeline.config')

config = tf.estimator.RunConfig(model_dir=model_dir)

train_and_eval_dict = model_lib.create_estimator_and_inputs(
    run_config=config,
    hparams=model_hparams.create_hparams(None),
    pipeline_config_path=pipeline_config_path,
    train_steps=None,
    sample_1_of_n_eval_examples=1,
    sample_1_of_n_eval_on_train_examples=(5))

estimator = train_and_eval_dict['estimator']

Running inference

output_dict1 = estimator.predict(predict_input_fn)
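Note that Estimator.predict() returns a lazy generator of per-example dicts, so the graph only runs once it is iterated. A minimal sketch of pulling out the single prediction (the key names in the comment are the usual Object Detection API outputs, assumed rather than checked here):

# predict() is lazy; iterating the generator triggers the actual session run
predictions = next(iter(output_dict1))
# typical keys (assumed): 'detection_boxes', 'detection_scores',
# 'detection_classes', 'num_detections'
print(sorted(predictions.keys()))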

This prints out some log messages, one of which is:

INFO:tensorflow:Restoring parameters from ./pretrained_models/tensorflow/ssd_mobilenet_v1_coco_2018_01_28/model.ckpt

So it seems the pretrained weights are being loaded. But the results look like this:

[Image with bad detections]

Loading the same model as a Predictor

from tensorflow.contrib import predictor

model_dir = './pretrained_models/tensorflow/ssd_mobilenet_v1_coco_2018_01_28'
saved_model_dir = os.path.join(model_dir, 'saved_model')
predict_fn = predictor.from_saved_model(saved_model_dir)

Running inference

output_dict2 = predict_fn({'inputs': image_np_expanded})

And the results look good:

[Image with correct detections]
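For completeness, here is a rough sketch of how a Predictor output dict like this can be drawn onto the image with the Object Detection API's visualization helpers; the label map path and the category_index construction are assumptions, not part of the original code:

from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util

# Assumed: the standard COCO label map shipped with the Object Detection API
category_index = label_map_util.create_category_index_from_labelmap(
    'object_detection/data/mscoco_label_map.pbtxt')

# Outputs are batched; index 0 selects our single image. Boxes come back
# in normalized [ymin, xmin, ymax, xmax] coordinates.
vis_util.visualize_boxes_and_labels_on_image_array(
    image_np,
    output_dict2['detection_boxes'][0],
    output_dict2['detection_classes'][0].astype(np.int32),
    output_dict2['detection_scores'][0],
    category_index,
    use_normalized_coordinates=True)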

1 Answer

When you load the model as an Estimator from checkpoint files, below is the restore function associated with the SSD models, from ssd_meta_arch.py:

def restore_map(self,
                  fine_tune_checkpoint_type='detection',
                  load_all_detection_checkpoint_vars=False):
    """Returns a map of variables to load from a foreign checkpoint.
    See parent class for details.
    Args:
      fine_tune_checkpoint_type: whether to restore from a full detection
        checkpoint (with compatible variable names) or to restore from a
        classification checkpoint for initialization prior to training.
        Valid values: `detection`, `classification`. Default 'detection'.
      load_all_detection_checkpoint_vars: whether to load all variables (when
         `fine_tune_checkpoint_type='detection'`). If False, only variables
         within the appropriate scopes are included. Default False.
    Returns:
      A dict mapping variable names (to load from a checkpoint) to variables in
      the model graph.
    Raises:
      ValueError: if fine_tune_checkpoint_type is neither `classification`
        nor `detection`.
    """
    if fine_tune_checkpoint_type not in ['detection', 'classification']:
      raise ValueError('Not supported fine_tune_checkpoint_type: {}'.format(
          fine_tune_checkpoint_type))

    if fine_tune_checkpoint_type == 'classification':
      return self._feature_extractor.restore_from_classification_checkpoint_fn(
          self._extract_features_scope)

    if fine_tune_checkpoint_type == 'detection':
      variables_to_restore = {}
      for variable in tf.global_variables():
        var_name = variable.op.name
        if load_all_detection_checkpoint_vars:
          variables_to_restore[var_name] = variable
        else:
          if var_name.startswith(self._extract_features_scope):
            variables_to_restore[var_name] = variable

    return variables_to_restore

As you can see, even if the config file sets from_detection_checkpoint: True, only the variables in the feature extractor scope will be restored. To restore all variables, you have to set

load_all_detection_checkpoint_vars: True

in the config file.
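In pipeline.config that would look roughly like this (a sketch; field placement assumed from the API's train.proto, with the rest of train_config left unchanged):

train_config: {
  # ... existing training fields unchanged ...
  fine_tune_checkpoint: "./pretrained_models/tensorflow/ssd_mobilenet_v1_coco_2018_01_28/model.ckpt"
  from_detection_checkpoint: true
  load_all_detection_checkpoint_vars: true
}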

So the situation above is quite clear: when loading the model as an Estimator, only the variables in the feature extractor scope are restored. Since the weights in the box predictor scopes are never restored, the Estimator naturally produces random predictions.

When loading the model as a Predictor, all weights are loaded, so the predictions are reasonable.
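One way to convince yourself of this is to list what is actually stored in the checkpoint and compare it against the scopes restore_map keeps; a small sketch (the scope names in the comment are assumptions based on the SSD meta architecture):

import tensorflow as tf

ckpt_path = './pretrained_models/tensorflow/ssd_mobilenet_v1_coco_2018_01_28/model.ckpt'

# The checkpoint contains both the feature extractor variables (the
# 'FeatureExtractor' scope that restore_map keeps by default) and the
# 'BoxPredictor' variables that get dropped without the config change.
for name, shape in tf.train.list_variables(ckpt_path):
    print(name, shape)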
