
Nine --- PyTorch learning --- concatenation & splitting / operations & statistics

2022-08-13 19:31:59 · Views: 198 · Source: Internet



## PyTorch learning (6)

### Concatenation and splitting

- cat
- stack
- split
- chunk

#### cat()

- concatenates the given sequence of tensors along an existing dimension
- all tensors must have the same shape (except in the concatenating dimension) or be empty
- torch.cat() can be seen as the inverse operation of torch.split() and torch.chunk()
- torch.cat(tensors, dim=)

```python
# a working example
import torch

a = torch.rand(3,32,8)
b = torch.rand(6,32,8) # same shape as a except in the concatenation dimension
c = torch.cat([a,b], dim=0)
print(a.shape)
print(b.shape)
print(c.shape)
-------------------------------------------------------
torch.Size([3, 32, 8])
torch.Size([6, 32, 8])
torch.Size([9, 32, 8])
```

```python
# a failing example
import torch

a = torch.rand(3,32,8)
b = torch.rand(6,32,8) # with dim=1, the sizes along dim 0 (3 vs 6) differ, so cat fails
c = torch.cat([a,b], dim=1)
print(a.shape)
print(b.shape)
print(c.shape)
-------------------------------------------------------
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 3 but got size 6 for tensor number 1 in the list.
```
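Since the bullets above describe cat() as the inverse of split() and chunk(), a quick sketch (shapes chosen arbitrarily) can confirm that splitting and re-concatenating reproduces the original tensor:

```python
import torch

a = torch.rand(6, 32, 8)

# split a into chunks of 3 along dim 0, then concatenate them back together
pieces = torch.split(a, 3, 0)
restored = torch.cat(pieces, dim=0)

print(torch.equal(a, restored))  # True: the round trip reproduces a exactly
```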

#### stack()

- creates a new dimension
- joins the input tensors along a new dimension: each tensor is expanded, then the results are concatenated
- all tensors in the sequence must have the same shape
- torch.stack(tensors, dim=)

```python
import torch

a = torch.rand(6,32,8)
b = torch.rand(6,32,8) # a and b must have identical shapes
c = torch.stack([a,b], dim=0)
print(a.shape)
print(b.shape)
print(c.shape)
-------------------------------------------------------
torch.Size([6, 32, 8])
torch.Size([6, 32, 8])
torch.Size([2, 6, 32, 8])
```
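The "expand, then join" description above can be checked directly: stack() behaves like unsqueezing each tensor at the new dimension and then calling cat(). A small sketch:

```python
import torch

a = torch.rand(6, 32, 8)
b = torch.rand(6, 32, 8)

# stack creates the new dim; unsqueeze + cat builds the same result by hand
s = torch.stack([a, b], dim=0)
c = torch.cat([a.unsqueeze(0), b.unsqueeze(0)], dim=0)

print(torch.equal(s, c))  # True
print(s.shape)            # torch.Size([2, 6, 32, 8])
```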

#### split()

- by length: each chunk's size along dim is fixed
- splits the input tensor into equally shaped chunks
- if the tensor's size along the given dimension is not evenly divisible by split_size, the last chunk is smaller than the others
- torch.split(tensor, split_size, dim)

```python
# case 1
import torch

a = torch.rand(6,32,8)
b = torch.split(a,3,0)
print(a.shape)
print(len(b))
print(b[0].shape)
print(b[1].shape)
-------------------------------------------------------
torch.Size([6, 32, 8])
2
torch.Size([3, 32, 8])
torch.Size([3, 32, 8])
```

```python
# case 2
import torch

a = torch.rand(7,32,8)
b = torch.split(a,3,0)
print(a.shape)
print(len(b))
print(b[0].shape)
print(b[1].shape)
print(b[2].shape)
-------------------------------------------------------
torch.Size([7, 32, 8])
3
torch.Size([3, 32, 8])
torch.Size([3, 32, 8])
torch.Size([1, 32, 8])
```
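Besides a single chunk length, split_size can also be a list of per-chunk lengths, a usage not shown in the cases above; a sketch:

```python
import torch

a = torch.rand(7, 32, 8)

# the lengths must sum to the size of dim 0 (here 2 + 4 + 1 = 7)
b = torch.split(a, [2, 4, 1], 0)

print(len(b))       # 3
print(b[0].shape)   # torch.Size([2, 32, 8])
print(b[1].shape)   # torch.Size([4, 32, 8])
print(b[2].shape)   # torch.Size([1, 32, 8])
```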

#### chunk()

- by count: the number of chunks is fixed
- splits the tensor into a given number of chunks (chunks) along dimension dim
- if the tensor's size along the given dimension is not evenly divisible by chunks, the last chunk is smaller than the others
- torch.chunk(tensor, chunks, dim)

```python
import torch

a = torch.rand(6,32,8)
b = torch.chunk(a,3,0)
print(a.shape)
print(len(b))
print(b[0].shape)
print(b[1].shape)
print(b[2].shape)
-------------------------------------------------------
torch.Size([6, 32, 8])
3
torch.Size([2, 32, 8])
torch.Size([2, 32, 8])
torch.Size([2, 32, 8])
```
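When the size along dim is not evenly divisible by chunks, the trailing chunk is smaller, mirroring the split() behavior above; a sketch:

```python
import torch

a = torch.rand(7, 32, 8)
b = torch.chunk(a, 3, 0)  # 7 is not evenly divisible by 3

print(len(b))       # 3
print(b[0].shape)   # torch.Size([3, 32, 8])
print(b[2].shape)   # torch.Size([1, 32, 8])
```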

### Operations and statistics

- basic arithmetic
- powers and square roots
- matrix multiplication
- rounding functions
- clamping

#### Basic arithmetic (operands must be broadcastable)

- add: addition
- sub: subtraction
- mul: multiplication
- div: division
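The broadcasting requirement means the operand shapes must be alignable from the trailing dimensions; a sketch of one compatible and one incompatible pairing (shapes chosen purely for illustration):

```python
import torch

a = torch.rand(3, 4)
b = torch.rand(4)   # broadcastable with a: trailing dimensions match
c = a + b           # b is expanded to shape (3, 4)
print(c.shape)      # torch.Size([3, 4])

try:
    a + torch.rand(3)   # not broadcastable: trailing sizes 4 and 3 differ
except RuntimeError as err:
    print("broadcast error:", err)
```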

##### torch.add()

```python
import torch

a = torch.rand(3,4)
b = torch.rand(4)

c = a+b
d = torch.add(a,b) # both forms produce the same result

print(c)
print(d)
-------------------------------------------------------
tensor([[1.3796, 0.4727, 1.7063, 1.4197],
        [0.5088, 0.2003, 1.6186, 0.9607],
        [0.7644, 0.2571, 1.4409, 0.9180]])
tensor([[1.3796, 0.4727, 1.7063, 1.4197],
        [0.5088, 0.2003, 1.6186, 0.9607],
        [0.7644, 0.2571, 1.4409, 0.9180]])
```

##### torch.sub()

```python
import torch

a = torch.rand(3,4)
b = torch.rand(4)

c = a - b
d = torch.sub(a,b)

print(c)
print(d)
-------------------------------------------------------
tensor([[ 0.3418, -0.0872, -0.4209, -0.1290],
        [ 0.3348, -0.0099, -0.2430, -0.3949],
        [-0.1601, -0.1792,  0.1060, -0.5435]])
tensor([[ 0.3418, -0.0872, -0.4209, -0.1290],
        [ 0.3348, -0.0099, -0.2430, -0.3949],
        [-0.1601, -0.1792,  0.1060, -0.5435]])
```

##### torch.mul()

- torch.mul(input, value, out=None)
- multiplies every element of input elementwise by a scalar value or a tensor (value) and returns a new tensor

```python
import torch

a = torch.rand(3,3)
b = torch.eye(3,3)

c = torch.mul(a,b)

print(a)
print(b)
print(c)
-------------------------------------------------------
tensor([[0.2426, 0.2934, 0.9999],
        [0.3949, 0.5847, 0.8023],
        [0.7302, 0.4891, 0.8976]])
tensor([[1., 0., 0.],
        [0., 1., 0.],
        [0., 0., 1.]])
tensor([[0.2426, 0.0000, 0.0000],
        [0.0000, 0.5847, 0.0000],
        [0.0000, 0.0000, 0.8976]])
```

##### torch.div() & /

```python
import torch

a = torch.full([2,2],2) # create a 2x2 tensor filled with 2
b = 2
c = torch.div(a,b)
d = a / b # div() behaves the same as /

print(a)
print(c)
print(d)
-------------------------------------------------------
tensor([[2, 2],
        [2, 2]])
tensor([[1., 1.],
        [1., 1.]])
tensor([[1., 1.],
        [1., 1.]])
```
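Beyond true division, torch.div also accepts a rounding_mode argument (available in PyTorch 1.8 and later) for floor-style integer division; a sketch:

```python
import torch

a = torch.tensor([[5, 7], [9, 11]])

c = torch.div(a, 2)                         # true division, result is float
d = torch.div(a, 2, rounding_mode='floor')  # floor division, result stays integer

print(c)  # 2.5, 3.5 / 4.5, 5.5 as floats
print(d)  # 2, 3 / 4, 5 as integers
```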

#### Powers and square roots

- pow & **
- sqrt (square root) & rsqrt (reciprocal square root)

##### pow & **

```python
import torch

a = torch.full([2,2],2)
b = a.pow(2)
c = a**2
d = b.pow(0.5)

print(b)
print(c)
print(d)
-------------------------------------------------------
tensor([[4, 4],
        [4, 4]])
tensor([[4, 4],
        [4, 4]])
tensor([[2., 2.],
        [2., 2.]])
```

##### sqrt (square root) & rsqrt (reciprocal square root)

- torch.sqrt(input, out=None) returns a new tensor containing the square root of each element of input
- torch.rsqrt(input, out=None) returns a new tensor containing the reciprocal of the square root of each element of input

```python
import torch

a = torch.full([2,2],2)
b = torch.sqrt(a)
c = torch.rsqrt(a)

print(a)
print(b)
print(c)
-------------------------------------------------------
tensor([[2, 2],
        [2, 2]])
tensor([[1.4142, 1.4142],
        [1.4142, 1.4142]])
tensor([[0.7071, 0.7071],
        [0.7071, 0.7071]])
```
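As the bullets above state, rsqrt is the elementwise reciprocal of sqrt; a quick check (using a float fill value so sqrt applies directly):

```python
import torch

a = torch.full([2, 2], 2.0)   # float fill value

b = torch.sqrt(a)
c = torch.rsqrt(a)

print(torch.allclose(c, 1.0 / b))  # True: rsqrt(a) == 1 / sqrt(a) elementwise
```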

#### Matrix multiplication

- matmul()
- torch.mm(mat1,mat2,out=None)
- torch.bmm(batch1,batch2,out=None)
- torch.matmul(tensor1,tensor2,out=None)

##### torch.mm() --- 2-D matrix multiplication

- mm performs plain matrix multiplication only, so the two input tensors must have shapes (n x m) and (m x p)
- (n x m) multiplied by (m x p) gives (n x p)

```python
import torch

a = torch.rand(3,4)
b = torch.rand(4,5)

c = torch.mm(a,b)

print(a.shape)
print(b.shape)
print(c.shape)
-------------------------------------------------------
torch.Size([3, 4])
torch.Size([4, 5])
torch.Size([3, 5])
```
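For 2-D inputs, matmul and the @ operator reduce to the same product as mm; a quick check:

```python
import torch

a = torch.rand(3, 4)
b = torch.rand(4, 5)

# all three spellings compute the same 2-D matrix product
print(torch.allclose(torch.mm(a, b), torch.matmul(a, b)))  # True
print(torch.allclose(torch.mm(a, b), a @ b))               # True
```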

##### torch.bmm() --- batched 3-D matrix multiplication

- bmm multiplies two 3-D tensors with shapes (b x n x m) and (b x m x p)
- the first dimension is the batch dimension; the output has shape (b x n x p)

```python
import torch

a = torch.rand(3,2,4)
b = torch.rand(3,4,5)

c = torch.bmm(a,b)

print(a.shape)
print(b.shape)
print(c.shape)
-------------------------------------------------------
torch.Size([3, 2, 4])
torch.Size([3, 4, 5])
torch.Size([3, 2, 5])
```

##### torch.matmul()

- matmul performs tensor multiplication and accepts high-dimensional inputs
- multiplies tensor1 by tensor2, broadcasting over the leading (batch) dimensions
- the @ operator is equivalent to matmul
- e.g. if tensor1 has shape (j x 1 x n x m) and tensor2 has shape (k x m x p), the output has shape (j x k x n x p)

```python
import torch

a = torch.rand(3,1,2,4)
b = torch.rand(5,4,6)

c = torch.matmul(a,b)
d = a @ b
print(a.shape)
print(b.shape)
print(c.shape)
print(d.shape)
-------------------------------------------------------
torch.Size([3, 1, 2, 4])
torch.Size([5, 4, 6])
torch.Size([3, 5, 2, 6])
torch.Size([3, 5, 2, 6])
```
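The broadcasting in the example above can be verified by hand: each batch slice c[i, j] is simply the 2-D product of a[i, 0] and b[j]. A sketch of that check:

```python
import torch

a = torch.rand(3, 1, 2, 4)
b = torch.rand(5, 4, 6)

c = torch.matmul(a, b)  # batch dims broadcast to (3, 5); result is (3, 5, 2, 6)

# each batch slice is an ordinary 2-D matrix product of the broadcast inputs
ok = all(
    torch.allclose(c[i, j], a[i, 0] @ b[j])
    for i in range(3) for j in range(5)
)
print(ok)  # True
```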

#### Rounding functions

- .floor() rounds down
- .ceil() rounds up
- .round() rounds to the nearest integer
- .trunc() keeps the integer part
- .frac() keeps the fractional part

```python
import torch

a = torch.tensor(3.1415926)

b = a.floor()
c = a.ceil()
d = a.round()
e = a.trunc()
f = a.frac()

print(a)
print(b)
print(c)
print(d)
print(e)
print(f)
-------------------------------------------------------
tensor(3.1416)
tensor(3.)
tensor(4.)
tensor(3.)
tensor(3.)
tensor(0.1416)
```
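Note that trunc() and frac() are complementary: as the output above suggests, their sum recovers the original value. A quick check:

```python
import torch

a = torch.tensor(3.1415926)

# the integer part plus the fractional part gives back the original value
print(torch.allclose(a.trunc() + a.frac(), a))  # True
```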

#### Clamping

- the clamp function constrains every element of input to the interval [min, max] and returns the result as a new tensor; you can also set only min or only max
- torch.clamp(input, min, max)

```python
import torch

a = torch.rand(3,3)*20
b = torch.clamp(a,0,10)
c = torch.clamp(a,7,14)

print(a)
print(b)
print(c)
-------------------------------------------------------
tensor([[ 1.8016, 14.4178, 10.5252],
        [ 8.3026,  0.1275, 19.7785],
        [ 2.6293, 13.7800, 18.9552]])
tensor([[ 1.8016, 10.0000, 10.0000],
        [ 8.3026,  0.1275, 10.0000],
        [ 2.6293, 10.0000, 10.0000]])
tensor([[ 7.0000, 14.0000, 10.5252],
        [ 8.3026,  7.0000, 14.0000],
        [ 7.0000, 13.7800, 14.0000]])
```
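As noted above, clamp can also apply only a lower or only an upper bound, via the min and max keyword arguments; a sketch:

```python
import torch

a = torch.tensor([1.0, 8.0, 15.0])

low  = torch.clamp(a, min=5)    # lower bound only: values below 5 become 5
high = torch.clamp(a, max=10)   # upper bound only: values above 10 become 10

print(low)   # tensor([ 5.,  8., 15.])
print(high)  # tensor([ 1.,  8., 10.])
```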

 

Source: https://www.cnblogs.com/311dih/p/16583856.html