DeepDeblur-PyTorch

This is a PyTorch implementation of our research. Please refer to our CVPR 2017 paper for details:

Deep Multi-scale Convolutional Neural Network for Dynamic Scene Deblurring
[paper]
[supplementary]
[slide]

If you find our work useful in your research or publication, please cite our work:

@InProceedings{Nah_2017_CVPR,
  author = {Nah, Seungjun and Kim, Tae Hyun and Lee, Kyoung Mu},
  title = {Deep Multi-Scale Convolutional Neural Network for Dynamic Scene Deblurring},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {July},
  year = {2017}
}

The original Torch7 implementation is available here.

Dependencies

  • python 3 (tested with anaconda3)
  • PyTorch 1.6
  • tqdm
  • imageio
  • scikit-image
  • numpy
  • matplotlib
  • readline

Please refer to this issue for the versions.
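To double-check the environment, the installed versions can be printed with a short snippet like the following (a convenience check, not part of the repository):

# quick environment check: print the installed versions of the packages listed above
# (readline ships with Python itself on Linux/macOS, so it is not checked here)
import torch, tqdm, imageio, skimage, numpy, matplotlib

for name, module in [('PyTorch', torch), ('tqdm', tqdm), ('imageio', imageio),
                     ('scikit-image', skimage), ('numpy', numpy), ('matplotlib', matplotlib)]:
    print(f'{name:12s} {module.__version__}')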

Datasets

Training and evaluation use the GOPRO_Large and REDS datasets; put them under the data root described below.
Usage examples

  • Preparing dataset

Before running the code, put the datasets in a directory of your choice. By default, the data root is set to ~/Research/dataset; see src/option.py:

group_data.add_argument('--data_root', type=str, default='~/Research/dataset', help='dataset root location')

Put your dataset under args.data_root.

The dataset layout should look like this:

# GOPRO_Large dataset
~/Research/dataset/GOPRO_Large/train/GOPR0372_07_00/blur_gamma/....
# REDS dataset
~/Research/dataset/REDS/train/train_blur/000/...
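
A quick way to confirm the layout is a small script along these lines (a sketch only; the sharp/ folder name and the .png extension are assumptions based on the standard GOPRO_Large structure, not taken from this repository):

# sketch: confirm that the GOPRO_Large training sequences are where the code expects them
from pathlib import Path

data_root = Path('~/Research/dataset').expanduser()   # same value as the --data_root default
train_dir = data_root / 'GOPRO_Large' / 'train'

for seq in sorted(train_dir.iterdir()):                # e.g. GOPR0372_07_00, GOPR0372_07_01, ...
    blur = list((seq / 'blur_gamma').glob('*.png'))    # blurred inputs
    sharp = list((seq / 'sharp').glob('*.png'))        # ground-truth targets (assumed folder name)
    print(f'{seq.name}: {len(blur)} blurred / {len(sharp)} sharp frames')
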
  • Example commands
# single GPU training
python main.py --n_GPUs 1 --batch_size 8 # save the results in default experiment/YYYY-MM-DD_hh-mm-ss
python main.py --n_GPUs 1 --batch_size 8 --save_dir GOPRO_L1  # save the results in experiment/GOPRO_L1

# adversarial training (the --loss format is explained after these examples)
python main.py --n_GPUs 1 --batch_size 8 --loss 1*L1+1*ADV
python main.py --n_GPUs 1 --batch_size 8 --loss 1*L1+3*ADV
python main.py --n_GPUs 1 --batch_size 8 --loss 1*L1+0.1*ADV

# train with GOPRO_Large dataset
python main.py --n_GPUs 1 --batch_size 8 --dataset GOPRO_Large
# train with REDS dataset (always set --do_test false)
python main.py --n_GPUs 1 --batch_size 8 --dataset REDS --do_test false --milestones 100 150 180 --end_epoch 200

# save part of the evaluation results (default)
python main.py --n_GPUs 1 --batch_size 8 --dataset GOPRO_Large --save_results part
# save no evaluation results (faster at test time)
python main.py --n_GPUs 1 --batch_size 8 --dataset GOPRO_Large --save_results none
# save all of the evaluation results
python main.py --n_GPUs 1 --batch_size 8 --dataset GOPRO_Large --save_results all
# multi-GPU training (DataParallel)
python main.py --n_GPUs 2 --batch_size 16
# multi-GPU training (DistributedDataParallel), recommended for the best speed
# single command version (do not set ranks)
python launch.py --n_GPUs 2 main.py --batch_size 16

# multi-command version (type in independent shells with the corresponding ranks, useful for debugging)
python main.py --batch_size 16 --distributed true --n_GPUs 2 --rank 0 # shell 0
python main.py --batch_size 16 --distributed true --n_GPUs 2 --rank 1 # shell 1
# single precision inference (default)
python launch.py --n_GPUs 2 main.py --batch_size 16 --precision single

# half precision inference (faster and requires less memory)
python launch.py --n_GPUs 2 main.py --batch_size 16 --precision half

# half precision inference with AMP
python launch.py --n_GPUs 2 main.py --batch_size 16 --amp true
# optional mixed-precision training
# mixed precision training may result in different accuracy
python main.py --n_GPUs 1 --batch_size 16 --amp true
python main.py --n_GPUs 2 --batch_size 16 --amp true
python launch.py --n_GPUs 2 main.py --batch_size 16 --amp true
# Advanced usage examples 
# using launch.py is recommended for the best speed and convenience
python launch.py --n_GPUs 4 main.py --dataset GOPRO_Large
python launch.py --n_GPUs 4 main.py --dataset GOPRO_Large --milestones 500 750 900 --end_epoch 1000 --save_results none
python launch.py --n_GPUs 4 main.py --dataset GOPRO_Large --milestones 500 750 900 --end_epoch 1000 --save_results part
python launch.py --n_GPUs 4 main.py --dataset GOPRO_Large --milestones 500 750 900 --end_epoch 1000 --save_results all
python launch.py --n_GPUs 4 main.py --dataset GOPRO_Large --milestones 500 750 900 --end_epoch 1000 --save_results all --amp true

python launch.py --n_GPUs 4 main.py --dataset REDS --milestones 100 150 180 --end_epoch 200 --save_results all --do_test false
python launch.py --n_GPUs 4 main.py --dataset REDS --milestones 100 150 180 --end_epoch 200 --save_results all --do_test false --do_validate false
# Commands used to generate the below results
python launch.py --n_GPUs 2 main.py --dataset GOPRO_Large --milestones 500 750 900 --end_epoch 1000
python launch.py --n_GPUs 4 main.py --dataset REDS --milestones 100 150 180 --end_epoch 200 --do_test false

For more advanced usage, please take a look at src/option.py.
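
As a note on the --loss option used in the adversarial-training examples: a weighted loss is written as a string such as 1*L1+1*ADV. The repository parses this internally; a simplified sketch of that kind of parsing (not the actual code under src/loss) could look like:

# sketch: split a loss specification such as '1*L1+0.1*ADV' into (weight, name) pairs
def parse_loss_spec(spec):
    terms = []
    for term in spec.split('+'):
        weight, name = term.split('*')
        terms.append((float(weight), name))
    return terms

print(parse_loss_spec('1*L1+0.1*ADV'))   # [(1.0, 'L1'), (0.1, 'ADV')]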

Results

  • Single-precision training results
| Dataset  | GOPRO_Large | REDS   |
| -------- | ----------- | ------ |
| PSNR     | 30.40       | 32.89  |
| SSIM     | 0.9018      | 0.9207 |
| Download | link        | link   |
  • Mixed-precision training results
| Dataset  | GOPRO_Large | REDS   | REDS (GOPRO_Large pretrained) |
| -------- | ----------- | ------ | ----------------------------- |
| PSNR     | 30.42       | 32.95  | 33.13                         |
| SSIM     | 0.9021      | 0.9209 | 0.9237                        |
| Download | link        | link   | link                          |

Mixed-precision training uses less memory and is faster, especially on NVIDIA Turing-generation GPUs.
Loss scaling is adopted to cope with the narrow representation range of fp16.
This may slightly improve or degrade accuracy.
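
For reference, the loss-scaling pattern mentioned above follows the standard torch.cuda.amp recipe; a generic sketch (not the repository's actual training loop) is:

# generic mixed-precision training step with loss scaling (torch.cuda.amp, available since PyTorch 1.6)
import torch

scaler = torch.cuda.amp.GradScaler()

def train_step(model, optimizer, criterion, blur, sharp):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():       # forward pass runs in fp16 where it is safe to do so
        output = model(blur)
        loss = criterion(output, sharp)
    scaler.scale(loss).backward()         # scale the loss so small gradients do not underflow in fp16
    scaler.step(optimizer)                # unscale the gradients and apply the update
    scaler.update()                       # adapt the scale factor for the next iteration
    return loss.item()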

  • Inference speed on RTX 2080 Ti (resolution: 1280x720)

Inference in half precision has a negligible effect on accuracy while requiring less memory and computation time.

| type     | FP32  | FP16  |
| -------- | ----- | ----- |
| fps      | 1.06  | 3.03  |
| time (s) | 0.943 | 0.330 |
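
A minimal sketch of what half-precision inference amounts to (model and image are placeholders, not objects from this repository):

# sketch: run a trained model in half precision on the GPU
import torch

@torch.no_grad()
def infer_half(model, image):                 # image: float32 tensor of shape (1, 3, H, W) in [0, 255]
    model = model.cuda().half().eval()        # cast parameters to fp16
    output = model(image.cuda().half())       # cast the input to fp16 as well
    return output.float().clamp(0, 255)       # back to fp32 for saving and metric computation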

Demo

To use the trained models, download the files, unzip them, and put them under DeepDeblur-PyTorch/experiment:

python main.py --save_dir SAVE_DIR --demo true --demo_input_dir INPUT_DIR_NAME --demo_output_dir OUTPUT_DIR_NAME
# SAVE_DIR is the experiment directory where the parameters are saved (GOPRO_L1, REDS_L1)
# SAVE_DIR is relative to DeepDeblur-PyTorch/experiment
# demo_output_dir is by default SAVE_DIR/results
# image dataloader looks into DEMO_INPUT_DIR, recursively

# example
# single GPU (GOPRO_Large, single precision)
python main.py --save_dir GOPRO_L1 --demo true --demo_input_dir ~/Research/dataset/GOPRO_Large/test/GOPR0384_11_00/blur_gamma
# single GPU (GOPRO_Large, amp-trained model, half precision)
python main.py --save_dir GOPRO_L1_amp --demo true --demo_input_dir ~/Research/dataset/GOPRO_Large/test/GOPR0384_11_00/blur_gamma --precision half
# multi-GPU (REDS, single precision)
python launch.py --n_GPUs 2 main.py --save_dir REDS_L1 --demo true --demo_input_dir ~/Research/dataset/REDS/test/test_blur --demo_output_dir OUTPUT_DIR_NAME
# multi-GPU (REDS, half precision)
python launch.py --n_GPUs 2 main.py --save_dir REDS_L1 --demo true --demo_input_dir ~/Research/dataset/REDS/test/test_blur --demo_output_dir OUTPUT_DIR_NAME --precision half
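
Because the loader walks the input directory recursively, any nested folder of images works. Collecting the files yourself would look roughly like this (a sketch, not the repository's dataloader; the extension list is an assumption):

# sketch: recursively collect the demo input images, as the demo dataloader does with demo_input_dir
from pathlib import Path

def collect_images(input_dir, extensions=('.png', '.jpg', '.jpeg')):
    input_dir = Path(input_dir).expanduser()
    return sorted(p for p in input_dir.rglob('*') if p.suffix.lower() in extensions)

images = collect_images('~/Research/dataset/GOPRO_Large/test/GOPR0384_11_00/blur_gamma')
print(len(images), 'images found')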

Differences from the original code

The default options are different from the original paper.

  • RGB range is [0, 255]
  • L1 loss by default (adversarial loss is optional; see the examples above)
  • Batch size increased to 16.
  • Distributed multi-GPU training is recommended.
  • Mixed-precision training is enabled; identical accuracy is not guaranteed.
  • The SSIM function changed from MATLAB to Python (scikit-image).

SSIM issue

There are many different SSIM implementations.
In this repository, the SSIM metric is based on the following function:

from skimage.metrics import structural_similarity
ssim = structural_similarity(ref_im, res_im, multichannel=True, gaussian_weights=True, use_sample_covariance=False)

The SSIM class in src/loss/metric.py supports PyTorch.
The SSIM function in MATLAB is not correct when applied to RGB images; see this issue for details.
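
For a self-contained check, the same function can be applied to a pair of saved images; the file names below are placeholders:

# sketch: compute PSNR and SSIM between a restored image and its ground truth
import imageio
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

ref_im = imageio.imread('sharp.png')        # ground-truth image (uint8, HxWx3) -- placeholder file name
res_im = imageio.imread('deblurred.png')    # restored image                    -- placeholder file name

psnr = peak_signal_noise_ratio(ref_im, res_im)
ssim = structural_similarity(
    ref_im, res_im,
    multichannel=True,                      # newer scikit-image versions use channel_axis=-1 instead
    gaussian_weights=True,                  # Gaussian windowing as in the original SSIM formulation
    use_sample_covariance=False,
)
print(f'PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}')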
