
An Introduction to PyTorch and Tensors




https://github.com/pytorch/pytorch

PyTorch is a Python package that provides two high-level features:

  • Tensor computation (like NumPy) with strong GPU acceleration
  • Deep neural networks built on a tape-based autograd system

[365datascience] Tensors have been around for nearly 200 years. In fact, the word 'tensor' was first introduced by William Rowan Hamilton. Interestingly, the meaning it carried then had little to do with what we have called tensors from 1898 until today.

How did tensors become important, you may ask? Well, not without the help of one of the biggest names in science: Albert Einstein! Einstein developed and formulated the whole theory of general relativity entirely in the language of tensors. Having done that, Einstein, while not a big fan of tensors himself, popularized tensor calculus more than anyone else could have.

Nowadays, we can argue that the word 'tensor' is still a bit 'underground'. You won't hear it in high school. In fact, your math teacher may have never heard of it. However, state-of-the-art machine learning frameworks are doubling down on tensors. The most prominent example is Google's TensorFlow.

A scalar has the lowest dimensionality and is always 1x1. It can be thought of as a vector of length 1, or a 1x1 matrix. In C terms: int i;

It is followed by a vector, where each element of that vector is a scalar. The dimensions of a vector are nothing but Mx1 or 1xM matrices. In C terms: int ary[10]; is one row of ten columns. An int* ary[10] can hold ten rows of one column each, or ten rows of M columns each, by treating each int row[M] as an int*.

Then we have matrices, which are nothing more than a collection of vectors. The dimensions of a matrix are MxN. In other words, a matrix is a collection of n column vectors of dimensions m by 1, or m row vectors of dimensions 1 by n. Furthermore, since scalars make up vectors, you can also think of a matrix as a collection of scalars, too.

Now, a tensor is the most general concept. Scalars, vectors, and matrices are all tensors of ranks 0, 1, and 2, respectively. Tensors are simply a generalization of the concepts we have seen so far. In C terms: float tensor[X][Y][Z][T];
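A minimal PyTorch sketch of the same progression (the shapes and values here are arbitrary examples):

import torch

scalar = torch.tensor(3.14)           # rank 0: a single value
vector = torch.tensor([1., 2., 3.])   # rank 1: shape (3,)
matrix = torch.zeros(2, 3)            # rank 2: shape (2, 3)
tensor4 = torch.zeros(4, 3, 2, 5)     # rank 4: like float tensor[X][Y][Z][T]
print(scalar.dim(), vector.dim(), matrix.dim(), tensor4.dim())  # 0 1 2 4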

 

What is a tape-based autograd system? [stackoverflow] There are different types of automatic differentiation, e.g. forward-mode, reverse-mode, and hybrids. The tape-based autograd in PyTorch simply refers to the use of reverse-mode automatic differentiation (source). Reverse-mode autodiff is simply a technique for computing gradients efficiently, and it happens to be the one used by backpropagation (source).
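A small sketch of what that means in practice (the function below is an arbitrary example): PyTorch records each operation during the forward pass, then replays that tape in reverse to accumulate gradients.

import torch

x = torch.tensor(2.0, requires_grad=True)
y = x**3 + 5*x   # forward pass: each op is recorded on the tape
y.backward()     # reverse-mode pass: replay the tape backwards
print(x.grad)    # dy/dx = 3*x**2 + 5 = 17.0 at x = 2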

You can reuse your favorite Python packages such as NumPy, SciPy, and Cython to extend PyTorch when needed.
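For example, CPU tensors and NumPy arrays convert cheaply in both directions, and torch.from_numpy even shares the underlying memory (the array values are arbitrary):

import numpy as np
import torch

a = np.arange(6.0).reshape(2, 3)
t = torch.from_numpy(a)   # zero-copy: t shares a's buffer
t *= 2                    # the in-place change is visible through a
print(a)                  # [[ 0.  2.  4.] [ 6.  8. 10.]]
b = t.numpy()             # back to NumPy, still sharing memory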

More About PyTorch

  • A GPU-Ready Tensor Library
  • Fast and Lean
  • Extensions Without Pain
  • Binaries for NVIDIA Jetson platforms (a series of embedded computing boards; presumably for computing without a display attached?)

CUDA is not supported on macOS.
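A common device-agnostic pattern, assuming nothing about the machine: pick CUDA when it is available and fall back to the CPU otherwise.

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
t = torch.ones(2, 2, device=device)   # lives on the GPU if one is usable
print(t.device)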

Package dependencies: https://github.com/pytorch/pytorch#install-dependencies https://pypi.org

  • astunparse - provides code generation capability for the native Python AST (abstract syntax tree); related to autograd
  • numpy - a general-purpose array-processing package; the ecosystem built on top of it includes Matplotlib, Seaborn, Plotly, Altair, Bokeh, Holoviz, Vispy, Napari, and PyVista
  • ninja - a small build system with a focus on speed
  • pyyaml - YAML is a data serialization format designed for human readability and interaction with scripting languages. PyYAML is a YAML parser and emitter for Python.
  • mkl - Intel® oneAPI Math Kernel Library
  • setuptools
  • cmake
  • cffi - Foreign Function Interface for Python calling C code
  • typing_extensions - typing defines a standard notation for Python function and variable type annotations; probably also related to autograd
  • future - the missing compatibility layer between Python 2 and Python 3 (not the Future used in asynchronous I/O)
  • six - Python 2 and 3 compatibility library
  • requests - Requests is a simple, yet elegant, HTTP library
  • dataclasses - https://www.python.org/dev/peps/pep-0557/ The @dataclass decorator adds __eq__, __gt__ (greater than), and so on. It seems PyTorch's autograd relies on custom classes and operator overloading rather than symbolic manipulation in the style of sympy. Still, sympy is more fun to play with:
from sympy import *  # pip install sympy
x, y, z = symbols('x y z')
print(simplify(sin(x)**2 + cos(x)**2))
print(expand((x+1) ** 3))
print(factor(x**3 + 1))
print(factor(x**33 + 1))
'''Output:
1
x**3 + 3*x**2 + 3*x + 1
(x + 1)*(x**2 - x + 1)
(x + 1)*(x**2 - x + 1)*(x**10 - x**9 + x**8 - x**7 + x**6 - x**5 + x**4 - x**3 + x**2 - x + 1)*(x**20 + x**19 - x**17 - x**16 + x**14 + x**13 - x**11 - x**10 - x**9 + x**7 + x**6 - x**4 - x**3 + x + 1)
'''
https://blog.csdn.net/shuangguo121/article/details/86611948
Collecting like terms is done with collect; simplifying a rational expression is cancel; partial-fraction decomposition is apart, which splits one fraction into a sum or difference of simpler fractions; expressions built from trigonometric functions can be simplified with trigsimp; to expand trigonometric functions, use expand_trig; when exponents can be combined, use powsimp. Expanding exponents is expand_power_exp; expanding bases is expand_power_base. A few of these are sketched below.
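A quick sketch of some of those helpers (all standard sympy calls; the expressions are arbitrary examples):

from sympy import *

x, y, a, b = symbols('x y a b')
print(collect(x*y + x + x**2*y + 2*x**2, x))    # x**2*(y + 2) + x*(y + 1)
print(cancel((x**2 - 1)/(x - 1)))               # x + 1
print(apart(1/(x*(x + 1)), x))                  # -1/(x + 1) + 1/x
print(trigsimp(sin(x)*cos(y) + cos(x)*sin(y)))  # sin(x + y)
print(expand_trig(sin(x + y)))                  # sin(x)*cos(y) + sin(y)*cos(x)
print(powsimp(x**a * x**b))                     # x**(a + b)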

Imperative Experiences

PyTorch is designed to be intuitive, linear in thought, and easy to use. When you execute a line of code, it gets executed. There isn't an asynchronous view of the world. When you drop into a debugger or receive error messages and stack traces, understanding them is straightforward. It is like the difference between a blocking while(recv) puts loop and asynchronous I/O with an event loop, or AJAX calls nested inside AJAX calls in JavaScript. Imperative, as in the imperative mood: you say "do it!" and the thing gets done, rather than you being left unsure of when it will finish. See also: concurrency, parallelism, processes, threads, coroutines, asynchronous I/O, and Python async - Fun_with_Words - cnblogs.com.
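A tiny illustration of that eager behavior (values are arbitrary): each line runs the moment the interpreter reaches it, so intermediate results can be printed or stepped through in a debugger immediately.

import torch

x = torch.randn(3)
y = x * 2         # executes right now; no deferred graph, no session
print(y)          # the value already exists and can be inspected
z = y.sum()       # an error here would raise at exactly this line,
print(z.item())   # with an ordinary Python stack trace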

Source: https://www.cnblogs.com/funwithwords/p/15706651.html
