Why is Qualcomm's QNN x86_64 CPU backend 88x slower than onnxruntime?


I am new to the Qualcomm AI Engine Direct SDK (QNN). Before deploying AI models directly onto Qualcomm devices, I wanted to take a look at QNN's x86_64 backend, which is also relevant to QNN's quantization process.

However, I found that Qualcomm's QNN x86_64 CPU backend is 88x slower than onnxruntime on inception_v3.

Here are the steps to reproduce the issue:

  1. Install the QNN SDK following Qualcomm's instructions.

  2. Download the model and convert it to ONNX:

import torch

# Source of model: https://pytorch.org/hub/pytorch_vision_inception_v3/

model = torch.hub.load("pytorch/vision:v0.10.0", "inception_v3", pretrained=True)
model.eval()

x = torch.rand(1, 3, 299, 299)
torch.onnx.export(model, x, "inception_v3.onnx", opset_version=17)
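
Before converting, it may be worth double-checking the exported graph's input/output tensor names, since they are referenced below ('x.1' and '914'); a minimal sketch using the onnx package:

import onnx

# Load and sanity-check the exported model
model = onnx.load("inception_v3.onnx")
onnx.checker.check_model(model)

# The names printed here should match the --input_dim / --out_node
# arguments passed to qnn-onnx-converter in the next step
print([i.name for i in model.graph.input])   # expected: ['x.1']
print([o.name for o in model.graph.output])  # expected: ['914']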
  3. Convert the model to a QNN cpp file:
${QNN_SDK_ROOT}/bin/x86_64-linux-clang/qnn-onnx-converter \
  --input_network inception_v3.onnx \
  --input_dim 'x.1' 1,3,299,299 \
  --out_node '914' \
  --output_path inception_v3.cpp
  4. Compile:
mkdir -p model_libs

${QNN_SDK_ROOT}/bin/x86_64-linux-clang/qnn-model-lib-generator \
  -c inception_v3.cpp \
  -b inception_v3.bin \
  -t x86_64-linux-clang \
  -o model_libs
  5. Generate model inputs:
import numpy as np

np.random.rand(3, 299, 299).astype(np.float32).tofile("input.raw")

# see rest in Ref: https://github.com/quic/ai-hub-models/issues/17

echo input.raw > input.txt
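
Alternatively, the raw input file and the input list can both be written from Python in one go (a sketch equivalent to the two commands above):

import numpy as np

# Flat float32 binary matching the 1x3x299x299 input (the batch dim adds no bytes)
np.random.rand(3, 299, 299).astype(np.float32).tofile("input.raw")

# qnn-net-run expects an input-list text file with one line per inference,
# each line pointing at the raw input file(s) for that inference
with open("input.txt", "w") as f:
    f.write("input.raw\n")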
  6. Run the model with profiling:
${QNN_SDK_ROOT}/bin/x86_64-linux-clang/qnn-net-run \
              --backend ${QNN_SDK_ROOT}/lib/x86_64-linux-clang/libQnnCpu.so \
              --model model_libs/x86_64-linux-clang/libinception_v3.so \
              --input_list input.txt \
              --profiling_level=basic \
              --keep_num_outputs=0 \
              --num_inferences=10

and inspect the profiling log:

${QNN_SDK_ROOT}/bin/x86_64-linux-clang/qnn-profile-viewer --input_log output/qnn-profiling-data_0.log

Here is the log:

Input Log File Location: output/qnn-profiling-data_0.log
Log File Created: Thu Sep 26 08:49:41 2024
Time Scale: 1e-06
Epoch Timestamp: 1727340581547093 Steady Clock Timestamp: 1380276319731
Generated using:
qnn-profile-viewer v2.26.0.240827110523_99241
qnn-net-run        v2.26.0.240827110523_99241
Backend            v2.26.0.240827110523_99241

Qnn Init/Prepare/Finalize/De-Init/Execute/Lib-Load Statistics:
------------------------------------------------------------
Init Stats:
-----------
    NetRun: 171679 us

Compose Graphs Stats:
--------------
    NetRun: 95902 us

Finalize Stats:
---------------
Graph 0 (inception_v3):
    NetRun: 75775 us
    Backend (GRAPH_FINALIZE): 75769 us

De-Init Stats:
--------------
    NetRun: 20778 us
    Backend (null): 0 us

Execute Stats (Overall):
------------------------
    NetRun IPS (includes IO and misc. time): 0.5542 inf/sec

Execute Stats (Average):
------------------------
Total Inference Time:
---------------------
Graph 0 (inception_v3):
    NetRun: 1803480 us
    Backend (GRAPH_EXECUTE): 1803294 us

Execute Stats (Min):
------------------------
Total Inference Time:
---------------------
Graph 0 (inception_v3):
    NetRun: 1754020 us
    Backend (GRAPH_EXECUTE): 1753902 us

Execute Stats (Max):
------------------------
Total Inference Time:
---------------------
Graph 0 (inception_v3):
    NetRun: 1895948 us
    Backend (GRAPH_EXECUTE): 1895815 us

We can see Backend (GRAPH_EXECUTE): 1803294 us, i.e. roughly 1.8 s per inference, which is consistent with the reported 0.5542 inf/sec. However, running the same ONNX model with the ONNXRuntime CPU execution provider:

import numpy as np
import onnxruntime

x = np.random.rand(1, 3, 299, 299).astype(np.float32)

session = onnxruntime.InferenceSession(
    "inception_v3.onnx", providers=["CPUExecutionProvider"]
)

outputs = session.run(["914"], input_feed={"x.1": x})

import time

N = 100
t1 = time.time()
for _ in range(N):
    outputs = session.run(["914"], input_feed={"x.1": x})
t2 = time.time()

print(f"average inference time = {(t2 - t1)/N*1000} milliseconds")

The output is:

average inference time = 21.910243034362793 milliseconds
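
In case threading skews the comparison, the onnxruntime measurement can also be repeated with a single intra-op thread (SessionOptions.intra_op_num_threads is standard onnxruntime API); a sketch:

import time

import numpy as np
import onnxruntime

# Restrict onnxruntime to one intra-op thread so the comparison with the
# QNN CPU backend is closer to core-for-core
sess_options = onnxruntime.SessionOptions()
sess_options.intra_op_num_threads = 1

session = onnxruntime.InferenceSession(
    "inception_v3.onnx", sess_options, providers=["CPUExecutionProvider"]
)

x = np.random.rand(1, 3, 299, 299).astype(np.float32)
session.run(["914"], input_feed={"x.1": x})  # warm-up

N = 100
t1 = time.time()
for _ in range(N):
    session.run(["914"], input_feed={"x.1": x})
t2 = time.time()
print(f"single-thread average inference time = {(t2 - t1)/N*1000} milliseconds")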

So I would like to know: why is QNN's x86_64 CPU backend so much slower than onnxruntime (1803.294 ms vs 21.91 ms)?

Any help would be appreciated.
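
For reference, here is a sketch of how one could check that the two backends actually compute the same result for the same raw input; the QNN output path below (output/Result_0/914.raw) is an assumption and may differ on your setup:

import numpy as np
import onnxruntime

# onnxruntime reference output for the same raw input fed to qnn-net-run
x = np.fromfile("input.raw", dtype=np.float32).reshape(1, 3, 299, 299)
session = onnxruntime.InferenceSession(
    "inception_v3.onnx", providers=["CPUExecutionProvider"]
)
ort_out = session.run(["914"], input_feed={"x.1": x})[0]

# QNN CPU backend output; this file name/location is an assumption
qnn_out = np.fromfile("output/Result_0/914.raw", dtype=np.float32).reshape(ort_out.shape)

print("max abs diff:", np.abs(ort_out - qnn_out).max())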

PS

  • QNN version: 2.26.0.240828
  • Host: x86_64 Ubuntu 22.04
  • I also checked an inception_v3 network converted from TensorFlow, as shown in Qualcomm's QNN tutorial. The result is the same.
  • I also noticed the following lines in the log from model quantization:
    62.1ms [  INFO ] [QNN_CPU] QnnGraph execute start
  2086.4ms [  INFO ] [QNN_CPU] QnnGraph execute end

This slow execution time is consistent with QNN's execution time above.

  • I also checked that, when the generated model cpp file is compiled into a .so, -O3 is added to CXX_FLAGS in the Makefile.linux-x86_64 generated by qnn-model-lib-generator in the temporary folder.
python onnxruntime qualcomm
1 Answer

I tried the same approach as you, and I got almost the same result as the onnx model, i.e. 28 ms.
