External-Attention-tensorflow (work in progress)


1. Residual Attention Usage

1.1. Paper

"Residual Attention: A Simple but Effective Method for Multi-Label Recognition" (ICCV 2021)

1.2. Overview

[Residual Attention architecture figure]

1.3. Usage Code

from attention.ResidualAttention import ResidualAttention
import tensorflow as tf

input = tf.random.normal(shape=(50, 7, 7, 512))
resatt = ResidualAttention(num_class=1000, la=0.2)
output = resatt(input)
print(output.shape)
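
For intuition, here is a minimal sketch of the class-specific residual attention idea from the paper: a 1x1 convolution scores every spatial location for every class, and the final logits are the spatially averaged scores plus a lambda-weighted spatial maximum. The class name and details below are illustrative, not the repo's implementation.

import tensorflow as tf

class ResidualAttentionSketch(tf.keras.layers.Layer):
    # Illustrative CSRA head: average-pooled class scores plus
    # a lambda-weighted max over spatial locations.
    def __init__(self, num_class, la=0.2):
        super().__init__()
        self.la = la
        # 1x1 conv acts as a per-location classifier over the channel dimension
        self.fc = tf.keras.layers.Conv2D(num_class, kernel_size=1, use_bias=False)

    def call(self, x):                                # x: (B, H, W, C)
        score = self.fc(x)                            # (B, H, W, num_class)
        base = tf.reduce_mean(score, axis=[1, 2])     # global-average logits
        att = tf.reduce_max(score, axis=[1, 2])       # class-specific peak logits
        return base + self.la * att                   # (B, num_class)

x = tf.random.normal((50, 7, 7, 512))
print(ResidualAttentionSketch(num_class=1000, la=0.2)(x).shape)  # (50, 1000)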

2. External Attention Usage

2.1. Paper

"Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks"

2.2. Overview

[External Attention architecture figure]

2.3. Usage Code

from attention.ExternalAttention import ExternalAttention
import tensorflow as tf

input = tf.random.normal(shape=(50, 49, 512))
ea = ExternalAttention(d_model=512, S=8)
output = ea(input)
print(output.shape)
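
As the paper title suggests, external attention is just two linear layers acting as learned memory units shared across the whole dataset, plus a double normalization. A minimal sketch (names are illustrative, not the repo's API):

import tensorflow as tf

class ExternalAttentionSketch(tf.keras.layers.Layer):
    # Illustrative external attention: two shared memory units M_k and M_v
    # implemented as bias-free Dense layers, with double normalization.
    def __init__(self, d_model, S=64):
        super().__init__()
        self.mk = tf.keras.layers.Dense(S, use_bias=False)        # features -> S memory slots
        self.mv = tf.keras.layers.Dense(d_model, use_bias=False)  # memory slots -> features

    def call(self, x):                               # x: (B, N, d_model)
        attn = tf.nn.softmax(self.mk(x), axis=1)     # softmax over the N tokens
        attn = attn / tf.reduce_sum(attn, axis=2, keepdims=True)  # l1-normalize over the S slots
        return self.mv(attn)                         # (B, N, d_model)

x = tf.random.normal((50, 49, 512))
print(ExternalAttentionSketch(d_model=512, S=8)(x).shape)  # (50, 49, 512)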

3. Self Attention Usage

3.1. Paper

"Attention Is All You Need"

3.2. Overview

[Scaled dot-product attention architecture figure]

3.3. Usage Code

from attention.SelfAttention import ScaledDotProductAttention
import tensorflow as tf

input = tf.random.normal((50, 49, 512))
sa = ScaledDotProductAttention(d_model=512, d_k=512, d_v=512, h=8)
output = sa(input, input, input)
print(output.shape)
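
For reference, a compact sketch of multi-head scaled dot-product attention as described in the paper; the class below is illustrative and does not mirror the repo's exact implementation:

import tensorflow as tf

class ScaledDotProductSketch(tf.keras.layers.Layer):
    # Illustrative multi-head scaled dot-product attention:
    # project q/k/v, attend per head, concatenate heads, project back.
    def __init__(self, d_model, d_k, d_v, h):
        super().__init__()
        self.h, self.d_k, self.d_v = h, d_k, d_v
        self.wq = tf.keras.layers.Dense(h * d_k)
        self.wk = tf.keras.layers.Dense(h * d_k)
        self.wv = tf.keras.layers.Dense(h * d_v)
        self.wo = tf.keras.layers.Dense(d_model)

    def _split(self, x, d):                           # (B, N, h*d) -> (B, h, N, d)
        b, n = tf.shape(x)[0], tf.shape(x)[1]
        return tf.transpose(tf.reshape(x, (b, n, self.h, d)), (0, 2, 1, 3))

    def call(self, q, k, v):
        qh = self._split(self.wq(q), self.d_k)
        kh = self._split(self.wk(k), self.d_k)
        vh = self._split(self.wv(v), self.d_v)
        logits = tf.matmul(qh, kh, transpose_b=True) / (self.d_k ** 0.5)
        out = tf.matmul(tf.nn.softmax(logits, axis=-1), vh)          # (B, h, N, d_v)
        b, n = tf.shape(out)[0], tf.shape(out)[2]
        out = tf.reshape(tf.transpose(out, (0, 2, 1, 3)), (b, n, self.h * self.d_v))
        return self.wo(out)                                          # (B, N, d_model)

x = tf.random.normal((50, 49, 512))
print(ScaledDotProductSketch(d_model=512, d_k=512, d_v=512, h=8)(x, x, x).shape)  # (50, 49, 512)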

4. Simplified Self Attention Usage

4.1. Paper

None

4.2. Overview

[Simplified self-attention architecture figure]

4.3. Usage Code

from attention.SimplifiedSelfAttention import SimplifiedScaledDotProductAttention
import tensorflow as tf

input = tf.random.normal((50, 49, 512))
ssa = SimplifiedScaledDotProductAttention(d_model=512, h=8)
output = ssa(input, input, input)
print(output.shape)
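
The "simplified" variant drops the learned q/k/v projections and applies scaled dot-product attention to the inputs directly. A minimal functional sketch (names are illustrative):

import tensorflow as tf

def simplified_attention_sketch(q, k, v, h=8):
    # Illustrative simplified attention: no q/k/v projection layers,
    # the inputs are split into h heads and attended directly.
    b, n = tf.shape(q)[0], tf.shape(q)[1]
    d = q.shape[-1] // h                                       # per-head dimension

    def split(x):                                              # (B, N, h*d) -> (B, h, N, d)
        return tf.transpose(tf.reshape(x, (b, -1, h, d)), (0, 2, 1, 3))

    att = tf.nn.softmax(tf.matmul(split(q), split(k), transpose_b=True) / (d ** 0.5), axis=-1)
    out = tf.matmul(att, split(v))                             # (B, h, N, d)
    return tf.reshape(tf.transpose(out, (0, 2, 1, 3)), (b, n, h * d))

x = tf.random.normal((50, 49, 512))
print(simplified_attention_sketch(x, x, x, h=8).shape)  # (50, 49, 512)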

5. Squeeze-and-Excitation Attention Usage

5.1. Paper

"Squeeze-and-Excitation Networks"

5.2. Overview

[Squeeze-and-Excitation block figure]

5.3. Usage Code

from attention.SEAttention import SEAttention
import tensorflow as tf

input = tf.random.normal((50, 7, 7, 512))
se = SEAttention(channel=512, reduction=8)
output = se(input)
print(output.shape)
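
The squeeze-and-excitation block itself is small: global average pooling ("squeeze"), a bottleneck MLP, and a per-channel sigmoid gate ("excitation"). A minimal sketch with illustrative names:

import tensorflow as tf

class SESketch(tf.keras.layers.Layer):
    # Illustrative SE block: squeeze with global average pooling,
    # excite with a bottleneck MLP, then rescale each channel.
    def __init__(self, channel, reduction=16):
        super().__init__()
        self.fc1 = tf.keras.layers.Dense(channel // reduction, activation="relu")
        self.fc2 = tf.keras.layers.Dense(channel, activation="sigmoid")

    def call(self, x):                                   # x: (B, H, W, C)
        s = tf.reduce_mean(x, axis=[1, 2])               # squeeze: (B, C)
        w = self.fc2(self.fc1(s))                        # excitation: (B, C) channel weights
        return x * w[:, None, None, :]                   # rescale each channel

x = tf.random.normal((50, 7, 7, 512))
print(SESketch(channel=512, reduction=8)(x).shape)  # (50, 7, 7, 512)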

6. SK Attention Usage

6.1. Paper

"Selective Kernel Networks"

6.2. Overview

[Selective Kernel (SK) module figure]

6.3. Usage Code

from attention.SKAttention import SKAttention
import tensorflow as tf

input = tf.random.normal((50, 7, 7, 512))
sk = SKAttention(channel=512, reduction=8)
output = sk(input)
print(output.shape)
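
Selective-kernel attention runs parallel convolutions with different kernel sizes and lets a softmax decide, per channel, how to mix the branches. A rough sketch under those assumptions (names and branch choices are illustrative):

import tensorflow as tf

class SKSketch(tf.keras.layers.Layer):
    # Illustrative SK module: parallel convolutions with different kernel
    # sizes, fused by per-channel softmax weights over the branches.
    def __init__(self, channel, kernels=(3, 5), reduction=8):
        super().__init__()
        self.convs = [tf.keras.layers.Conv2D(channel, k, padding="same", activation="relu")
                      for k in kernels]
        self.reduce = tf.keras.layers.Dense(channel // reduction, activation="relu")
        self.selects = [tf.keras.layers.Dense(channel) for _ in kernels]

    def call(self, x):                                          # x: (B, H, W, C)
        feats = [conv(x) for conv in self.convs]                # one feature map per kernel size
        u = tf.add_n(feats)                                     # fuse branches by summation
        z = self.reduce(tf.reduce_mean(u, axis=[1, 2]))         # squeezed descriptor: (B, C/r)
        logits = tf.stack([fc(z) for fc in self.selects], axis=1)   # (B, k, C)
        weights = tf.nn.softmax(logits, axis=1)                 # softmax over the k branches
        feats = tf.stack(feats, axis=1)                         # (B, k, H, W, C)
        return tf.reduce_sum(feats * weights[:, :, None, None, :], axis=1)  # (B, H, W, C)

x = tf.random.normal((50, 7, 7, 512))
print(SKSketch(channel=512, reduction=8)(x).shape)  # (50, 7, 7, 512)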

7. CBAM Attention Usage

7.1. Paper

"CBAM: Convolutional Block Attention Module"

7.2. Overview

[CBAM channel attention and spatial attention module figures]

7.3. Usage Code

from attention.CBAM import CBAMBlock
import tensorflow as tf

input = tf.random.normal((50, 7, 7, 512))
kernel_size = input.shape[1]  # use the 7x7 spatial size for the spatial-attention kernel
cbam = CBAMBlock(channel=512, reduction=16, kernel_size=kernel_size)
output = cbam(input)
print(output.shape)
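
CBAM applies channel attention (a shared MLP over average- and max-pooled descriptors) followed by spatial attention (a convolution over channel-pooled maps). A minimal sketch, not the repo's implementation:

import tensorflow as tf

class CBAMSketch(tf.keras.layers.Layer):
    # Illustrative CBAM: channel gate from avg/max pooled descriptors through
    # a shared MLP, then a spatial gate from a conv over channel-pooled maps.
    def __init__(self, channel, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = tf.keras.Sequential([
            tf.keras.layers.Dense(channel // reduction, activation="relu"),
            tf.keras.layers.Dense(channel),
        ])
        self.spatial_conv = tf.keras.layers.Conv2D(1, kernel_size, padding="same")

    def call(self, x):                                           # x: (B, H, W, C)
        # channel attention
        avg = self.mlp(tf.reduce_mean(x, axis=[1, 2]))           # (B, C)
        mx = self.mlp(tf.reduce_max(x, axis=[1, 2]))             # (B, C)
        x = x * tf.nn.sigmoid(avg + mx)[:, None, None, :]
        # spatial attention
        pooled = tf.concat([tf.reduce_mean(x, axis=-1, keepdims=True),
                            tf.reduce_max(x, axis=-1, keepdims=True)], axis=-1)
        return x * tf.nn.sigmoid(self.spatial_conv(pooled))      # (B, H, W, 1) gate, broadcast

x = tf.random.normal((50, 7, 7, 512))
print(CBAMSketch(channel=512, reduction=16, kernel_size=7)(x).shape)  # (50, 7, 7, 512)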

Source: https://www.cnblogs.com/ccfco/p/16386026.html
