MMDeploy Installation Notes

MMDeploy ONNX Runtime Tutorial

  • Based on the official tutorial

A From-scratch Example

Here is an example of how to deploy and run inference on MMDetection's Faster R-CNN model from scratch.

Step 1: Create a Virtual Environment and Install MMDetection

Please run the following commands in an Anaconda environment to install MMDetection.

conda create -n openmmlab python=3.7 -y
conda activate openmmlab

conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.2 -c pytorch -y

# install mmcv
pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu102/torch1.8/index.html

# install mmdetection
git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
pip install -r requirements/build.txt
pip install -v -e .
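
As a quick sanity check (not part of the original recipe), you can import the freshly installed packages and print their versions:

import torch
import mmcv
import mmdet

print('torch:', torch.__version__)      # expect 1.8.0
print('mmcv-full:', mmcv.__version__)   # expect 1.4.0
print('mmdet:', mmdet.__version__)
print('CUDA available:', torch.cuda.is_available())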

Step 2: Download the Pretrained Faster R-CNN Checkpoint from MMDetection

Download the checkpoint from this link and put it in {MMDET_ROOT}/checkpoints, where {MMDET_ROOT} is the root directory of your MMDetection codebase.
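
The original hyperlink is not reproduced here. As a hedged alternative, the sketch below downloads the checkpoint assuming the standard MMDetection model-zoo URL layout; verify the URL against the model zoo before relying on it:

import os
import urllib.request

# Assumed model-zoo URL following the usual MMDetection layout; double-check it.
URL = ('https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/'
       'faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth')

os.makedirs('checkpoints', exist_ok=True)            # run from {MMDET_ROOT}
dst = os.path.join('checkpoints', URL.rsplit('/', 1)[-1])
urllib.request.urlretrieve(URL, dst)
print('saved to', dst)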

Step 3: Install MMDeploy and ONNX Runtime

Step 3-1: Install MMDeploy

Please run the following commands in the Anaconda environment to install MMDeploy.

conda activate openmmlab

git clone https://github.com/open-mmlab/mmdeploy.git
cd mmdeploy
git submodule update --init --recursive
pip install -e .  # install MMDeploy in editable mode

Step 3-2a: Install ONNX Runtime

Once we have installed MMDeploy, we need to select an inference engine for model inference. Here we take ONNX Runtime as an example. Run the following command to install ONNX Runtime:

pip install onnxruntime==1.8.1

Then download the ONNX Runtime library, which is needed to build the MMDeploy plugin for ONNX Runtime.

Step 3-2b: Build the ONNX Runtime Plugin (required for model conversion)

wget https://github.com/microsoft/onnxruntime/releases/download/v1.8.1/onnxruntime-linux-x64-1.8.1.tgz

tar -zxvf onnxruntime-linux-x64-1.8.1.tgz
cd onnxruntime-linux-x64-1.8.1
export ONNXRUNTIME_DIR=$(pwd)
export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH  # you can also put these two exports in ~/.bashrc


cd ${MMDEPLOY_DIR} # To MMDeploy root directory
mkdir -p build && cd build

# build ONNXRuntime custom ops
cmake -DMMDEPLOY_TARGET_BACKENDS=ort -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} ..
make -j$(nproc)
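
Once make finishes, you can check that the custom-ops library was actually produced. A minimal check; the output path and file name are assumptions based on the default Linux build layout and may differ between MMDeploy versions:

from pathlib import Path

# Assumed default output location of the ONNX Runtime custom-ops library.
lib = Path('build/lib/libmmdeploy_onnxruntime_ops.so')  # relative to ${MMDEPLOY_DIR}
print(lib.resolve(), 'exists:', lib.exists())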

Step 3-2c: Build the MMDeploy SDK (needed for the C API)

# build MMDeploy SDK
cmake -DMMDEPLOY_BUILD_SDK=ON \
      -DCMAKE_CXX_COMPILER=g++-7 \
      -DOpenCV_DIR=/path/to/OpenCV/lib/cmake/OpenCV \
      -Dspdlog_DIR=/path/to/spdlog/lib/cmake/spdlog \
      -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} \
      -DMMDEPLOY_TARGET_BACKENDS=ort \
      -DMMDEPLOY_CODEBASES=mmdet ..
make -j$(nproc) && make install

# build MMDeploy SDK: a concrete example
# (this OpenCV was installed via apt-get)
cmake -DMMDEPLOY_BUILD_SDK=ON \
      -DCMAKE_CXX_COMPILER=g++-7 \
      -DOpenCV_DIR=/usr/lib/x86_64-linux-gnu/cmake/opencv4 \
      -Dspdlog_DIR=/usr/lib/x86_64-linux-gnu/cmake/spdlog \
      -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} \
      -DMMDEPLOY_TARGET_BACKENDS=ort \
      -DMMDEPLOY_CODEBASES=mmdet ..

# ${MMDEPLOY_DIR}, ${MMDET_DIR} and ${ONNXRUNTIME_DIR} can all be defined in
# ~/.bashrc; run `source ~/.bashrc` to make them take effect.

Extra: verify that the backend and custom ops are installed correctly

python ${MMDEPLOY_DIR}/tools/check_env.py

Step 4: Model Conversion

Once we have installed MMDetection, MMDeploy and ONNX Runtime, and built the plugin for ONNX Runtime, we can convert the Faster R-CNN model to an .onnx file that ONNX Runtime can load. Run the following command to use the deploy tool:

# Assume you have installed MMDeploy in ${MMDEPLOY_DIR} and MMDetection in ${MMDET_DIR}.
# If you do not know where to find the paths, run `pip show mmdeploy` and `pip show mmdet` in your console.

python ${MMDEPLOY_DIR}/tools/deploy.py \
    ${MMDEPLOY_DIR}/configs/mmdet/detection/detection_onnxruntime_dynamic.py \
    ${MMDET_DIR}/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py \
    ${MMDET_DIR}/checkpoints/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
    ${MMDET_DIR}/demo/demo.jpg \
    --work-dir work_dirs \
    --device cpu \
    --show \
    --dump-info

# --work-dir: directory where the converted model is saved
# --show: display the backend inference result and the original PyTorch result as two images
# --dump-info: dump the extra output files needed by the SDK
# Notes:
# ${MMDEPLOY_DIR} and ${MMDET_DIR} are assumed to be defined in ~/.bashrc.
# Once converted, the model can be run for inference through the Python API.

Example: Inference Model
Now you can do model inference with the APIs provided by the backend. But what if you want to test the model instantly? We have some backend wrappers for you.

from mmdeploy.apis import inference_model

result = inference_model(model_cfg, deploy_cfg, backend_files, img=img, device=device)
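
For example, a concrete call for the model converted above might look like the following. The paths are illustrative and assume the directory layout from the earlier steps:

from mmdeploy.apis import inference_model

# Illustrative paths; substitute your actual MMDeploy/MMDetection checkouts.
deploy_cfg = 'mmdeploy/configs/mmdet/detection/detection_onnxruntime_dynamic.py'
model_cfg = 'mmdetection/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
backend_files = ['work_dirs/end2end.onnx']  # produced by tools/deploy.py in Step 4
img = 'mmdetection/demo/demo.jpg'

result = inference_model(model_cfg, deploy_cfg, backend_files, img=img, device='cpu')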

If the conversion script runs successfully, two images will be displayed on the screen one by one: the first is the inference result of ONNX Runtime, and the second is the result of PyTorch. At the same time, an ONNX model file end2end.onnx and three JSON files (SDK config files) will be generated in the work directory work_dirs.
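
If you prefer to drive end2end.onnx with ONNX Runtime directly, the custom-ops library from Step 3-2b has to be registered first. A minimal sketch, assuming the library path from the build above (the exact location and file name may differ between versions):

import numpy as np
import onnxruntime as ort

# Assumed location of the custom-ops library built in Step 3-2b.
ORT_OPS_LIB = 'mmdeploy/build/lib/libmmdeploy_onnxruntime_ops.so'

opts = ort.SessionOptions()
opts.register_custom_ops_library(ORT_OPS_LIB)
session = ort.InferenceSession('work_dirs/end2end.onnx', opts)

# A dummy NCHW float input only checks that the model loads and runs;
# real inputs must go through the MMDetection preprocessing pipeline.
dummy = np.random.rand(1, 3, 800, 1344).astype(np.float32)
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])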

Step 5: Run the MMDeploy SDK Demo

After model conversion, the SDK model files are saved in the work directory (work_dirs in Step 4).
Here is a recipe for building and running the object detection demo.

cd build/install/example

# path to the ONNX Runtime libraries
export LD_LIBRARY_PATH=/path/to/onnxruntime/lib
# example: export LD_LIBRARY_PATH=/home/zranguai/Deploy/Backend/ONNXRuntime/onnxruntime-linux-x64-1.8.1/lib

mkdir -p build && cd build
cmake -DOpenCV_DIR=/path/to/OpenCV/lib/cmake/OpenCV \
      -DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy ..
make object_detection

# example:
# cmake -DOpenCV_DIR=/usr/lib/x86_64-linux-gnu/cmake/opencv4 \
#       -DMMDeploy_DIR=${MMDEPLOY_DIR}/build/install/lib/cmake/MMDeploy ..


# suppress verbose logs
export SPDLOG_LEVEL=warn

# running the object detection example
./object_detection cpu ${work_dirs} ${path/to/an/image}
# example: ./object_detection cpu ${MMDEPLOY_DIR}/work_dirs ${MMDET_DIR}/demo/demo.jpg

If the demo runs successfully, an image named "output_detection.png" showing the detected objects will be generated.

Source: https://www.cnblogs.com/zranguai/p/16213922.html