Abstract
Our goal is to learn a mapping G : X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to enforce F(G(X)) ≈ X (and vice versa).
1. Introduction
What this paper does:
In this paper, we present a method that can learn to do the same: capturing special characteristics of one image collection and figuring out how these characteristics could be translated into the other image collection, all in the absence of any paired training examples.
- Notes the drawbacks of obtaining paired training data, and the advantage of this paper's algorithm: "We therefore seek an algorithm that can learn to translate between domains without paired input-output examples."
- Drawbacks of unpaired methods: even when the optimal G translates the domain X to a domain Ŷ distributed identically to Y,
(1) such a translation does not guarantee that an individual input x and output y are paired up in a meaningful way;
(2) it is difficult to optimize the adversarial objective in isolation (in practice this often leads to mode collapse, where all inputs map to the same output).
- Solution: add more structure to the objective.
If we have a translator G : X → Y and another translator F : Y → X, then G and F should be inverses of each other, and both mappings should be bijections; this is encouraged by adding a cycle consistency loss [64] that pushes F(G(x)) ≈ x and G(F(y)) ≈ y.
2. Related work
(1) GAN: We adopt an adversarial loss to learn the mapping such that the translated images cannot be distinguished from images in the target domain.
(2) Image-to-Image Translation: Our approach builds on the "pix2pix" framework of Isola et al. [22], which uses a conditional generative adversarial network [16] to learn a mapping from input to output images. The difference from those earlier methods (a point stressed repeatedly): we learn the mapping without paired training examples.
(3) Unpaired Image-to-Image Translation: Several earlier methods also tackle the unpaired setting, and they likewise use adversarial networks, with additional terms that force the output to be close to the input in a predefined metric space. In contrast, our formulation does not rely on any task-specific, predefined similarity function between the input and output, nor do we assume that the input and output have to lie in the same low-dimensional embedding space.
(4) Cycle Consistency: In this work, we introduce a similar loss to push G and F to be consistent with each other.
(5) Neural Style Transfer
3. Formulation
In addition, we introduce two adversarial discriminators DX and DY , where DX aims to distinguish between images {x} and translated images {F(y)}; in the same way, DY aims to discriminate between {y} and {G(x)}. Our objective contains two types of terms: adversarial losses [16] for matching the distribution of generated images to the data distribution in the target domain; and cycle consistency losses to prevent the learned mappings G and F from contradicting each other.
3.1. Adversarial Loss
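The notes leave this subsection empty; as a reminder, the adversarial term for the mapping G : X → Y and its discriminator D_Y follows the standard GAN objective, E[log D_Y(y)] + E[log(1 − D_Y(G(x)))]. A minimal NumPy sketch (the function name `gan_loss` is illustrative, and the real networks are replaced here by raw discriminator scores; note the paper actually trains with a least-squares variant of this loss for stability):

```python
import numpy as np

def gan_loss(d_real, d_fake):
    """Standard GAN objective, averaged over a batch:
    E[log D_Y(y)] + E[log(1 - D_Y(G(x)))].
    d_real: discriminator scores D_Y(y) on real target-domain images, in (0, 1)
    d_fake: discriminator scores D_Y(G(x)) on translated images, in (0, 1)
    """
    eps = 1e-12  # numerical safety for log near 0
    return np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))

# D_Y tries to maximize this value; G tries to minimize it.
d_real = np.array([0.9, 0.8])  # D_Y is confident these are real
d_fake = np.array([0.1, 0.2])  # D_Y is confident these are fake
print(gan_loss(d_real, d_fake))
```

The symmetric term L_GAN(F, D_X, Y, X) is defined the same way for the reverse mapping F : Y → X and its discriminator D_X.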
3.2. Cycle Consistency Loss
Adversarial losses alone cannot guarantee that the learned function maps an individual input xi to a desired output yi.
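The cycle consistency loss addresses this by penalizing the L1 reconstruction error of both cycles, L_cyc(G, F) = E[‖F(G(x)) − x‖₁] + E[‖G(F(y)) − y‖₁]. A minimal NumPy sketch (the toy generators below are stand-ins for the learned networks):

```python
import numpy as np

def cycle_consistency_loss(x, y, G, F):
    """L_cyc(G, F) = E[||F(G(x)) - x||_1] + E[||G(F(y)) - y||_1]:
    translating forward and then back should reproduce the input."""
    forward = np.mean(np.abs(F(G(x)) - x))   # x -> G(x) -> F(G(x)) ≈ x
    backward = np.mean(np.abs(G(F(y)) - y))  # y -> F(y) -> G(F(y)) ≈ y
    return forward + backward

# Toy example: G doubles pixel values, F halves them, so they are exact inverses.
G = lambda img: img * 2.0
F = lambda img: img / 2.0
x = np.random.rand(4, 8, 8)  # a small batch of "images" from domain X
y = np.random.rand(4, 8, 8)  # a small batch of "images" from domain Y
print(cycle_consistency_loss(x, y, G, F))  # 0.0, since the cycle is exact
```

When G and F are not inverses, both terms are positive, and minimizing them pushes the two mappings toward consistency.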
3.3. Full Objective
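The notes leave this subsection empty; the paper's full objective combines the two adversarial terms with the cycle term, weighted by a hyperparameter λ:

```latex
\mathcal{L}(G, F, D_X, D_Y) =
  \mathcal{L}_{\text{GAN}}(G, D_Y, X, Y)
  + \mathcal{L}_{\text{GAN}}(F, D_X, Y, X)
  + \lambda \, \mathcal{L}_{\text{cyc}}(G, F)
```

and the mappings are obtained by solving the minimax problem G*, F* = arg min_{G,F} max_{D_X,D_Y} L(G, F, D_X, D_Y).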
Source: https://www.cnblogs.com/dyr-bangbangda/p/15362350.html