Setting Up the CVPR 2021 Registration Algorithm LoFTR (LoFTR: Detector-Free Local Feature Matching with Transformers)

2021-10-04 23:01:50 | Views: 507 | Source: Internet



1. Download the paper:

https://arxiv.org/pdf/2104.00680.pdf

2. Download the code:

https://github.com/zju3dv/LoFTR

3. Create and activate a new Python virtual environment

conda create -n LoFTR python=3.7
conda activate LoFTR   # `source activate LoFTR` on older conda installs

4. Install the required packages

pip install torch==1.6.0 einops yacs kornia opencv-python matplotlib
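After installing, a quick sanity check can confirm every dependency imports cleanly before running the demo; a minimal sketch (note that opencv-python imports under the name cv2):

```python
import importlib.util

# Import names for the packages installed above
# (opencv-python imports as cv2).
required = ["torch", "einops", "yacs", "kornia", "cv2", "matplotlib"]

# find_spec returns None for packages that are not installed.
missing = [name for name in required
           if importlib.util.find_spec(name) is None]
print("missing:", missing or "none")
```

If anything is listed as missing, re-run the pip command above inside the LoFTR environment.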

5、下载额外库

在如下网址https://raw.githubusercontent.com/magicleap/SuperGluePretrainedNetwork/master/models/superglue.py

下载superglue.py

然后放到路径src/loftr/utils
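The download-and-place step can also be scripted with the standard library; a sketch, where repo_root is assumed to be your LoFTR checkout:

```python
import os
import urllib.request

# Raw URL of the file named in the step above.
RAW_URL = ("https://raw.githubusercontent.com/magicleap/"
           "SuperGluePretrainedNetwork/master/models/superglue.py")

def fetch_superglue(repo_root="."):
    """Download superglue.py into src/loftr/utils of the checkout."""
    dest_dir = os.path.join(repo_root, "src", "loftr", "utils")
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, "superglue.py")
    urllib.request.urlretrieve(RAW_URL, dest)
    return dest
```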

6. Download the pretrained model

Link: https://pan.baidu.com/s/1dwUDx6A9lRMBkCSowLIz5Q
Extraction code: jlcl
Place it at weights/outdoor_ds.ckpt

7. Run the demo

1) Add the project root to PYTHONPATH:

export PYTHONPATH=$PYTHONPATH:/home1/users/XXX/Codes/LoFTR-master/
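If exporting PYTHONPATH is inconvenient, the same effect can be had at the top of the script itself; a sketch, where the /home1/... path is the placeholder from the command above and should be replaced with your own checkout:

```python
import sys

# Put the LoFTR checkout on sys.path before importing from src.*
REPO_ROOT = "/home1/users/XXX/Codes/LoFTR-master"
if REPO_ROOT not in sys.path:
    sys.path.insert(0, REPO_ROOT)
```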

2) Create mydemo.py with the following code:

import os
os.chdir("..")  # the output PDF path below is relative to the parent of the repo root

# Optionally pin a GPU before importing torch:
# os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch
import cv2
import numpy as np
import matplotlib.cm as cm
from src.utils.plotting import make_matching_figure


from src.loftr import LoFTR, default_cfg

# The default config uses dual-softmax.
# The outdoor and indoor models share the same config.
# You can change the default values like thr and coarse_match_type.

matcher = LoFTR(config=default_cfg)
matcher.load_state_dict(torch.load("/home1/users/XXX/Codes/LoFTR-master/weights/outdoor_ds.ckpt")['state_dict'])
matcher = matcher.eval().cuda()


# Load example images
img0_pth = "/home1/users/XXX/Codes/LoFTR-master/assets/phototourism_sample_images/united_states_capitol_26757027_6717084061.jpg"
img1_pth = "/home1/users/XXX/Codes/LoFTR-master/assets/phototourism_sample_images/united_states_capitol_98169888_3347710852.jpg"
img0_raw = cv2.imread(img0_pth, cv2.IMREAD_GRAYSCALE)
img1_raw = cv2.imread(img1_pth, cv2.IMREAD_GRAYSCALE)
img0_raw = cv2.resize(img0_raw, (img0_raw.shape[1]//8*8, img0_raw.shape[0]//8*8))  # input size should be divisible by 8
img1_raw = cv2.resize(img1_raw, (img1_raw.shape[1]//8*8, img1_raw.shape[0]//8*8))

img0 = torch.from_numpy(img0_raw)[None][None].cuda() / 255.
img1 = torch.from_numpy(img1_raw)[None][None].cuda() / 255.
batch = {'image0': img0, 'image1': img1}

# Inference with LoFTR and get prediction
with torch.no_grad():
    matcher(batch)
    mkpts0 = batch['mkpts0_f'].cpu().numpy()
    mkpts1 = batch['mkpts1_f'].cpu().numpy()
    mconf = batch['mconf'].cpu().numpy()

# Draw
color = cm.jet(mconf)
text = [
    'LoFTR',
    'Matches: {}'.format(len(mkpts0)),
]
fig = make_matching_figure(img0_raw, img1_raw, mkpts0, mkpts1, color, text=text, path="LoFTR-master/LoFTR-colab-demo.pdf")
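The cv2.resize calls in the script shrink each side to the nearest multiple of 8 because LoFTR's coarse stage downsamples the input by a factor of 8. The rounding is plain integer arithmetic:

```python
def floor_to_multiple(x, m=8):
    # Integer-divide then multiply to round down to a multiple of m.
    return x // m * m

# A 481-pixel side becomes 480; an already-aligned 640 stays 640.
print(floor_to_multiple(481), floor_to_multiple(640))  # -> 480 640
```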

3) Run python mydemo.py. The matching figure is written to the path passed to make_matching_figure (LoFTR-colab-demo.pdf).

Source: https://blog.csdn.net/qq_17783559/article/details/120599526
