How do I deploy a pretrained model from an AWS SageMaker Notebook instance?

Question · Votes: 0 · Answers: 1

I have a pretrained model that I load from an S3 bucket into an AWS SageMaker Notebook instance. When I feed it a test image pulled from the same S3 bucket, it returns accurate predictions as expected. I now want to deploy it so that I have an endpoint I can integrate with an AWS Lambda function and AWS API Gateway, which would let me use the model in a real-time application. Any idea how to deploy the model from an AWS SageMaker Notebook instance and get its endpoint? The code from the .ipynb file is given below for reference.

import boto3
import numpy as np
import sagemaker
from keras.models import load_model
from skimage.io import imread
from skimage.transform import resize

role = sagemaker.get_execution_role()

# Download the pretrained model from S3.
bucketname = 'bucket'          # bucket where the model is hosted
filename = 'test_model.h5'     # name of the model
s3 = boto3.resource('s3')
s3.Bucket(bucketname).download_file(filename, 'test_model_new.h5')

model = load_model('test_model_new.h5')

# Download a test image from S3.
bucketname = 'bucket'          # bucket where the test image is hosted
filename = 'folder/image.png'  # key (prefix + file name)
s3.Bucket(bucketname).download_file(filename, 'image.png')
file_name = 'image.png'

# Resize to the model's input shape and add a batch dimension.
test = np.array([resize(imread(file_name), (137, 310, 3))])

test_predict = model.predict(test)

# np.int is deprecated; threshold with the built-in int instead.
print((test_predict > 0.5).astype(int))
Tags: amazon-web-services, amazon-s3, aws-lambda, amazon-sagemaker
1 Answer

4 votes

Here is the solution that worked for me. Just follow the steps below.

1 - Load the model into SageMaker's Jupyter environment with the help of load_model:

from keras.models import load_model

model = load_model(<your model name goes here>)  # in my case it's model.h5

2 - Now that the model is loaded, convert it into the protobuf format required by AWS with the help of:
def convert_h5_to_aws(loaded_model):
    from tensorflow.python.saved_model import builder
    from tensorflow.python.saved_model.signature_def_utils import predict_signature_def
    from tensorflow.python.saved_model import tag_constants
    from keras import backend as K

    model_version = '1'
    export_dir = 'export/Servo/' + model_version

    # Build the Protocol Buffer SavedModel at 'export_dir'.
    builder = builder.SavedModelBuilder(export_dir)

    # Create a prediction signature to be used by the TensorFlow Serving Predict API.
    signature = predict_signature_def(
        inputs={"inputs": loaded_model.input},
        outputs={"score": loaded_model.output})

    with K.get_session() as sess:
        # Save the meta graph and variables.
        builder.add_meta_graph_and_variables(
            sess=sess,
            tags=[tag_constants.SERVING],
            signature_def_map={"serving_default": signature})
        builder.save()

convert_h5_to_aws(model)

# Package the SavedModel and upload it to the SageMaker session's default bucket.
import tarfile
with tarfile.open('model.tar.gz', mode='w:gz') as archive:
    archive.add('export', recursive=True)

import sagemaker
sagemaker_session = sagemaker.Session()
inputs = sagemaker_session.upload_data(path='model.tar.gz', key_prefix='model')
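The directory layout inside the archive matters: the SageMaker TensorFlow serving container expects the SavedModel under export/Servo/&lt;version&gt;/ in the tarball. A small helper to package the export tree and return the archived paths as a sanity check (a sketch; the file names mirror the step above):

```python
import tarfile

def package_model(export_root="export", archive_path="model.tar.gz"):
    """Tar the export/ tree into model.tar.gz, preserving the
    export/Servo/<version>/ layout, and return the archived paths."""
    with tarfile.open(archive_path, mode="w:gz") as archive:
        archive.add(export_root, recursive=True)
    with tarfile.open(archive_path) as archive:
        return archive.getnames()

# Before uploading, confirm the SavedModel landed where the container expects:
# assert any(name.endswith("saved_model.pb") for name in package_model())
```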

3 - Now you can deploy your model with the help of:
!touch train.py  # an empty entry point script is enough here

from sagemaker.tensorflow.model import TensorFlowModel
sagemaker_model = TensorFlowModel(model_data='s3://' + sagemaker_session.default_bucket() + '/model/model.tar.gz',
                                  role=role,
                                  framework_version='1.15.2',
                                  entry_point='train.py')

%%time
predictor = sagemaker_model.deploy(initial_instance_count=1,
                                   instance_type='ml.m4.xlarge')
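Once deploy() returns, the predictor can be called directly from the notebook. The TensorFlow Serving container takes a JSON body; here is a small serializer for the image batch (a sketch, assuming the "inputs" request format of the serving REST API — the exact key can differ between container versions):

```python
import json
import numpy as np

def to_tfs_payload(batch):
    """Serialize a numpy image batch into the JSON structure
    accepted by the TensorFlow Serving REST API."""
    return json.dumps({"inputs": np.asarray(batch).tolist()})

# Hypothetical call against the predictor returned by deploy(),
# where `test` is the preprocessed batch from the question:
# result = predictor.predict(json.loads(to_tfs_payload(test)))
```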

This generates an endpoint, which you can see under the Inference section of the Amazon SageMaker console, and with that endpoint you can now make predictions from the Jupyter notebook as well as from web and mobile applications. Liam's YouTube tutorial and Priya's AWS blog post helped me a lot.
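For the Lambda/API Gateway integration mentioned in the question, a Lambda function can call the endpoint through the sagemaker-runtime API. A minimal sketch, assuming a hypothetical endpoint name and that the serving container returns an "outputs" key in its JSON response (both are assumptions to adjust for your setup):

```python
import json

ENDPOINT_NAME = "my-keras-endpoint"  # hypothetical; use your deployed endpoint's name

def parse_prediction(body, threshold=0.5):
    """Turn the TF Serving JSON response into binary labels,
    mirroring the (scores > 0.5) thresholding from the question."""
    scores = json.loads(body)["outputs"]
    return [[int(value > threshold) for value in row] for row in scores]

def lambda_handler(event, context):
    # boto3 ships with the Lambda Python runtime, so it is imported here.
    import boto3
    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=event["body"],  # API Gateway forwards the request body as-is
    )
    labels = parse_prediction(response["Body"].read())
    return {"statusCode": 200, "body": json.dumps({"labels": labels})}
```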
