This commit is contained in:
2025-12-16 09:23:53 +08:00
parent 19138d3cc1
commit 9e7efd0626
409 changed files with 272713 additions and 241 deletions
+1 -214
@@ -5,10 +5,6 @@
# Trae AI files
.trae/
# Virtual environment
venv/
.venv/
# Python cache
__pycache__/
*.pyc
@@ -30,213 +26,4 @@ Thumbs.db
# Build directories
build/
dist/
*.egg-info/
# Standard Python .gitignore content
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[codz]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py.cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# UV
# Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
#uv.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
#poetry.toml
# pdm
# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
# pdm recommends including project-wide configuration in pdm.toml, but excluding .pdm-python.
# https://pdm-project.org/en/latest/usage/project/#working-with-version-control
#pdm.lock
#pdm.toml
.pdm-python
.pdm-build/
# pixi
# Similar to Pipfile.lock, it is generally recommended to include pixi.lock in version control.
#pixi.lock
# Pixi creates a virtual environment in the .pixi directory, just like venv module creates one
# in the .venv directory. It is recommended not to include this directory in version control.
.pixi
# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.envrc
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
# PyCharm
# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/
# Abstra
# Abstra is an AI-powered process automation framework.
# Ignore directories containing user credentials, local state, and settings.
# Learn more at https://abstra.io/docs
.abstra/
# Visual Studio Code
# Visual Studio Code specific template is maintained in a separate VisualStudioCode.gitignore
# that can be found at https://github.com/github/gitignore/blob/main/Global/VisualStudioCode.gitignore
# and can be added to the global gitignore or merged into this file. However, if you prefer,
# you could uncomment the following to ignore the entire vscode folder
# .vscode/
# Ruff stuff:
.ruff_cache/
# PyPI configuration file
.pypirc
# Cursor
# Cursor is an AI-powered code editor. `.cursorignore` specifies files/directories to
# exclude from AI features like autocomplete and code analysis. Recommended for sensitive data
# refer to https://docs.cursor.com/context/ignore-files
.cursorignore
.cursorindexingignore
# Marimo
marimo/_static/
marimo/_lsp/
__marimo__/
+298
@@ -0,0 +1,298 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "8e7fb728",
"metadata": {
"origin_pos": 0
},
"source": [
"## 英汉术语对照\n",
"\n",
"鞍点,saddle point\n",
"\n",
"变换,transform\n",
"\n",
"编码器,encoder\n",
"\n",
"标签,label\n",
"\n",
"步幅,stride\n",
"\n",
"参数,parameter\n",
"\n",
"长短期记忆网络,long short-term memory (LSTM)\n",
"\n",
"超参数,hyperparameter\n",
"\n",
"层序softmaxhierarchical softmax\n",
"\n",
"查准率,precision\n",
"\n",
"成本,cost\n",
"\n",
"词表,vocabulary\n",
"\n",
"词嵌入,word embedding\n",
"\n",
"词向量,word vector\n",
"\n",
"词元,token\n",
"\n",
"词元分析器,tokenizer\n",
"\n",
"词元化,tokenize\n",
"\n",
"汇聚层,pooling layer\n",
"\n",
"稠密,dense\n",
"\n",
"大小,size\n",
"\n",
"导入,import\n",
"\n",
"轮,epoch\n",
"\n",
"暂退法,dropout\n",
"\n",
"动量法,momentum (method)\n",
"\n",
"独立同分布,independent and identically distributed (i.i.d.)\n",
"\n",
"端到端,end-to-end\n",
"\n",
"多层感知机,multilayer perceptron\n",
"\n",
"多头注意力,multi-head attention\n",
"\n",
"二元分类,binary classification\n",
"\n",
"二元,bigram\n",
"\n",
"子采样,subsample\n",
"\n",
"发散,diverge\n",
"\n",
"泛化,generalization\n",
"\n",
"泛化误差,generalization error\n",
"\n",
"方差,variance\n",
"\n",
"分类,classification\n",
"\n",
"分类器,classifier\n",
"\n",
"负采样,negative sampling\n",
"\n",
"感受野,receptive field\n",
"\n",
"格拉姆矩阵,Gram matrix\n",
"\n",
"共现,co-occurrence\n",
"\n",
"广播,broadcast\n",
"\n",
"规范化,normalization\n",
"\n",
"过拟合,overfitting\n",
"\n",
"核回归,kernel regression\n",
"\n",
"恒等映射,identity mapping\n",
"\n",
"假设,hypothesis\n",
"\n",
"基准,baseline\n",
"\n",
"激活函数,activation function\n",
"\n",
"解码器,decoder\n",
"\n",
"近似法,approximate method\n",
"\n",
"经验风险最小化,empirical risk minimization\n",
"\n",
"局部最小值,local minimum\n",
"\n",
"卷积核,convolutional kernel\n",
"\n",
"卷积神经网络,convolutional neural network\n",
"\n",
"决策边界,decision boundary\n",
"\n",
"均值,mean\n",
"\n",
"均方误差,mean squared error\n",
"\n",
"均匀采样,uniform sampling\n",
"\n",
"块,block\n",
"\n",
"困惑度,perplexity\n",
"\n",
"拉普拉斯平滑,Laplace smoothing\n",
"\n",
"连结,concatenate\n",
"\n",
"类,class\n",
"\n",
"交叉熵,cross-entropy\n",
"\n",
"连续词袋,continous bag-of-words (CBOW)\n",
"\n",
"零张量,zero tensor\n",
"\n",
"流水线,pipeline\n",
"\n",
"滤波器,filter\n",
"\n",
"门控循环单元,gated recurrent units (GRU)\n",
"\n",
"目标检测,object detection\n",
"\n",
"偏置,bias\n",
"\n",
"偏导数,partial derivative\n",
"\n",
"偏移量,offset\n",
"\n",
"批量,batch\n",
"\n",
"齐普夫定律,Zipf's law\n",
"\n",
"欠拟合,underfitting\n",
"\n",
"情感分析,sentiment analysis\n",
"\n",
"全连接层,fully-connected layer\n",
"\n",
"权重,weight\n",
"\n",
"三元,trigram\n",
"\n",
"上采样,upsample\n",
"\n",
"上下文变量,context variable\n",
"\n",
"上下文窗口,context window\n",
"\n",
"上下文词,context word\n",
"\n",
"上下文向量,context vector\n",
"\n",
"实例/示例,instance\n",
"\n",
"收敛,converge\n",
"\n",
"属性,property\n",
"\n",
"数值方法,numerical method\n",
"\n",
"数据集,dataset\n",
"\n",
"数据示例,data instance\n",
"\n",
"数据样例,data example\n",
"\n",
"顺序分区,sequential partitioning\n",
"\n",
"softmax回归,softmax regression\n",
"\n",
"随机采样,random sampling\n",
"\n",
"损失函数,loss function\n",
"\n",
"双向循环神经网络,bidirectional recurrent neural network\n",
"\n",
"特征,feature\n",
"\n",
"特征图,feature map\n",
"\n",
"特征值,eigenvalue\n",
"\n",
"梯度,gradient\n",
"\n",
"梯度裁剪,gradient clipping\n",
"\n",
"梯度消失,vanishing gradients\n",
"\n",
"填充,padding\n",
"\n",
"跳元模型,skip-gram model\n",
"\n",
"调参,tune hyperparameter\n",
"\n",
"停用词,stop words\n",
"\n",
"通道,channel\n",
"\n",
"凸优化,convex optimization\n",
"\n",
"图像,image\n",
"\n",
"未知词元,unknown token\n",
"\n",
"无偏估计,unbiased estimate\n",
"\n",
"误差,error\n",
"\n",
"小批量,minibatch\n",
"\n",
"小批量梯度,minibatch gradient\n",
"\n",
"线性模型,linear model\n",
"\n",
"线性回归,linear regression\n",
"\n",
"协同过滤,collaborative filtering\n",
"\n",
"学习率,learning rate\n",
"\n",
"训练误差,training error\n",
"\n",
"循环神经网络,recurrent neural network (RNN)\n",
"\n",
"样例,example\n",
"\n",
"一维梯度下降,gradient descent in one-dimensional space\n",
"\n",
"一元,unigram\n",
"\n",
"隐藏变量,hidden variable\n",
"\n",
"隐藏层,hidden layer\n",
"\n",
"优化器,optimizer\n",
"\n",
"语料库,corpus\n",
"\n",
"运算符,operator\n",
"\n",
"自注意力,self-attention\n",
"\n",
"真实值,ground truth\n",
"\n",
"指标,metric\n",
"\n",
"支持向量机,support vector machine\n",
"\n",
"注意力机制,attention mechanism\n",
"\n",
"注意力模型,attention model\n",
"\n",
"注意力提示,attention cue\n",
"\n",
"准确率/精度,accuracy\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,226 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "33970a1a",
"metadata": {
"origin_pos": 0
},
"source": [
"# 使用Amazon EC2实例\n",
":label:`sec_aws`\n",
"\n",
"本节将展示如何在原始Linux机器上安装所有库。回想一下, :numref:`sec_sagemaker`讨论了如何使用Amazon SageMaker,而在云上自己构建实例的成本更低。本演示包括三个步骤。\n",
"\n",
"1. 从AWS EC2请求GPU Linux实例。\n",
"1. 安装CUDA(或使用预装CUDA的Amazon机器映像)。\n",
"1. 安装深度学习框架和其他库以运行本书的代码。\n",
"\n",
"此过程也适用于其他实例(和其他云),尽管需要一些细微的修改。在继续操作之前,你需要创建一个AWS帐户,有关更多详细信息,请参阅 :numref:`sec_sagemaker`。\n",
"\n",
"## 创建和运行EC2实例\n",
"\n",
"登录到你的aws账户后,单击“EC2”(在 :numref:`fig_aws`中用红色方框标记)进入EC2面板。\n",
"\n",
"![打开EC2控制台](../img/aws.png)\n",
":width:`400px`\n",
":label:`fig_aws`\n",
"\n",
":numref:`fig_ec2`显示EC2面板,敏感帐户信息变为灰色。\n",
"\n",
"![EC2面板](../img/ec2.png)\n",
":width:`700px`\n",
":label:`fig_ec2`\n",
"\n",
"### 预置位置\n",
"选择附近的数据中心以降低延迟,例如“Oregon”(俄勒冈)( :numref:`fig_ec2`右上角的红色方框)。如果你位于中国,你可以选择附近的亚太地区,例如首尔或东京。请注意,某些数据中心可能没有GPU实例。\n",
"\n",
"### 增加限制\n",
"\n",
"在选择实例之前,请点击 :numref:`fig_ec2`所示左侧栏中的“Limits”(限制)标签查看是否有数量限制。 :numref:`fig_limits`显示了此类限制的一个例子。账号目前无法按地域打开p2.xlarge实例。如果你需要打开一个或多个实例,请点击“Request limit increase”(请求增加限制)链接,申请更高的实例配额。一般来说,需要一个工作日的时间来处理申请。\n",
"\n",
"![实例数量限制](../img/limits.png)\n",
":width:`700px`\n",
":label:`fig_limits`\n",
"\n",
"### 启动实例\n",
"\n",
"接下来,单击 :numref:`fig_ec2`中红框标记的“Launch Instance”(启动实例)按钮,启动你的实例。\n",
"\n",
"我们首先选择一个合适的Amazon机器映像(Amazon Machine ImageAMI)。在搜索框中输入“ubuntu”( :numref:`fig_ubuntu`中的红色框标记)。\n",
"\n",
"![选择一个AMI](../img/ubuntu-new.png)\n",
":width:`700px`\n",
":label:`fig_ubuntu`\n",
"\n",
"EC2提供了许多不同的实例配置可供选择。对初学者来说,这有时会让人感到困惑。 :numref:`tab_ec2`列出了不同合适的计算机。\n",
"\n",
":不同的EC2实例类型\n",
"\n",
"| Name | GPU | Notes |\n",
"|------|-------------|-------------------------------|\n",
"| g2 | Grid K520 | 过时的 |\n",
"| p2 | Kepler K80 | 旧的GPU但Spot实例通常很便宜 |\n",
"| g3 | Maxwell M60 | 好的平衡 |\n",
"| p3 | Volta V100 | FP16的高性能 |\n",
"| g4 | Turing T4 | FP16/INT8推理优化 |\n",
":label:`tab_ec2`\n",
"\n",
"所有这些服务器都有多种类型,显示了使用的GPU数量。例如,p2.xlarge有1个GPU,而p2.16xlarge有16个GPU和更多内存。有关更多详细信息,请参阅[Amazon EC2 文档](https://aws.amazon.com/ec2/instance-types/)。\n",
"\n",
"![选择一个实例](../img/p2x.png)\n",
":width:`700px`\n",
":label:`fig_p2x`\n",
"\n",
"注意,你应该使用支持GPU的实例以及合适的驱动程序和支持GPU的深度学习框架。否则,你将感受不到使用GPU的任何好处。\n",
"\n",
"到目前为止,我们已经完成了启动EC2实例的七个步骤中的前两个步骤,如 :numref:`fig_disk`顶部所示。在本例中,我们保留“3. Configure Instance”(3. 配置实例)、“5. Add Tags”(5. 添加标签)和“6. Configure Security Group”(6. 配置安全组)步骤的默认配置。点击“4.添加存储”并将默认硬盘大小增加到64GB( :numref:`fig_disk`中的红色框标记)。请注意,CUDA本身已经占用了4GB空间。\n",
"\n",
"![修改硬盘大小](../img/disk.png)\n",
":width:`700px`\n",
":label:`fig_disk`\n",
"\n",
"最后,进入“7. Review”(7. 查看),点击“Launch”(启动),即可启动配置好的实例。系统现在将提示你选择用于访问实例的密钥对。如果你没有密钥对,请在 :numref:`fig_keypair`的第一个下拉菜单中选择“Create a new key pair”(新建密钥对),即可生成密钥对。之后,你可以在此菜单中选择“Choose an existing key pair”(选择现有密钥对),然后选择之前生成的密钥对。单击“Launch Instances”(启动实例)即可启动创建的实例。\n",
"\n",
"![选择一个密钥对](../img/keypair.png)\n",
":width:`500px`\n",
":label:`fig_keypair`\n",
"\n",
"如果生成了新密钥对,请确保下载密钥对并将其存储在安全位置。这是你通过SSH连接到服务器的唯一方式。单击 :numref:`fig_launching`中显示的实例ID可查看该实例的状态。\n",
"\n",
"![单击实例ID](../img/launching.png)\n",
":width:`700px`\n",
":label:`fig_launching`\n",
"\n",
"### 连接到实例\n",
"\n",
"如 :numref:`fig_connect`所示,实例状态变为绿色后,右键单击实例,选择`Connect`(连接)查看实例访问方式。\n",
"\n",
"![查看实例访问方法](../img/connect.png)\n",
":width:`700px`\n",
":label:`fig_connect`\n",
"\n",
"如果这是一个新密钥,它必须是不可公开查看的,SSH才能工作。转到存储`D2L_key.pem`的文件夹,并执行以下命令以使密钥不可公开查看:\n",
"\n",
"```bash\n",
"chmod 400 D2L_key.pem\n",
"```\n",
"\n",
"![查看实例访问和启动方法](../img/chmod.png)\n",
":width:`400px`\n",
":label:`fig_chmod`\n",
"\n",
"现在,复制 :numref:`fig_chmod`下方红色框中的ssh命令并粘贴到命令行:\n",
"\n",
"```bash\n",
"ssh -i \"D2L_key.pem\" ubuntu@ec2-xx-xxx-xxx-xxx.y.compute.amazonaws.com\n",
"```\n",
"\n",
"当命令行提示“Are you sure you want to continue connecting (yes/no)”(“你确定要继续连接吗?(是/否)”)时,输入“yes”并按回车键登录实例。\n",
"\n",
"你的服务器现在已就绪。\n",
"\n",
"## 安装CUDA\n",
"\n",
"在安装CUDA之前,请确保使用最新的驱动程序更新实例。\n",
"\n",
"```bash\n",
"sudo apt-get update && sudo apt-get install -y build-essential git libgfortran3\n",
"```\n",
"\n",
"我们在这里下载CUDA 10.1。访问NVIDIA的[官方存储库](https://developer.nvidia.com/cuda-toolkit-archive) 以找到下载链接,如 :numref:`fig_cuda`中所示。\n",
"\n",
"![查找CUDA 10.1下载地址](../img/cuda101.png)\n",
":width:`500px`\n",
":label:`fig_cuda`\n",
"\n",
"将说明复制粘贴到终端上,以安装CUDA 10.1。\n",
"\n",
"```bash\n",
"# 链接和文件名可能会发生更改,以NVIDIA的官方为准\n",
"wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin\n",
"sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600\n",
"wget http://developer.download.nvidia.com/compute/cuda/10.1/Prod/local_installers/cuda-repo-ubuntu1804-10-1-local-10.1.243-418.87.00_1.0-1_amd64.deb\n",
"sudo dpkg -i cuda-repo-ubuntu1804-10-1-local-10.1.243-418.87.00_1.0-1_amd64.deb\n",
"sudo apt-key add /var/cuda-repo-10-1-local-10.1.243-418.87.00/7fa2af80.pub\n",
"sudo apt-get update\n",
"sudo apt-get -y install cuda\n",
"```\n",
"\n",
"安装程序后,运行以下命令查看GPU:\n",
"\n",
"```bash\n",
"nvidia-smi\n",
"```\n",
"\n",
"最后,将CUDA添加到库路径以帮助其他库找到它。\n",
"\n",
"```bash\n",
"echo \"export LD_LIBRARY_PATH=\\${LD_LIBRARY_PATH}:/usr/local/cuda/lib64\" >> ~/.bashrc\n",
"```\n",
"\n",
"## 安装库以运行代码\n",
"\n",
"要运行本书的代码,只需在EC2实例上为linux用户执行 :ref:`chap_installation`中的步骤,并使用以下提示在远程linux服务器上工作。\n",
"\n",
"* 要在Miniconda安装页面下载bash脚本,请右击下载链接并选择“copy Link address”,然后执行`wget [copied link address]`。\n",
"* 运行`~/miniconda3/bin/conda init`, 你可能需要执行`source~/.bashrc`,而不是关闭并重新打开当前shell。\n",
"\n",
"## 远程运行Jupyter笔记本\n",
"\n",
"要远程运行Jupyter笔记本,你需要使用SSH端口转发。毕竟,云中的服务器没有显示器或键盘。为此,请从你的台式机(或笔记本电脑)登录到你的服务器,如下所示:\n",
"\n",
"```\n",
"# 此命令必须在本地命令行中运行\n",
"ssh -i \"/path/to/key.pem\" ubuntu@ec2-xx-xxx-xxx-xxx.y.compute.amazonaws.com -L 8889:localhost:8888\n",
"```\n",
"\n",
"接下来,转到EC2实例上本书下载的代码所在的位置,然后运行:\n",
"\n",
"```\n",
"conda activate d2l\n",
"jupyter notebook\n",
"```\n",
"\n",
":numref:`fig_jupyter`显示了运行Jupyter笔记本后可能的输出。最后一行是端口8888的URL。\n",
"\n",
"![运行Jupyter Notebook后的输出(最后一行是端口8888的URL](../img/jupyter.png)\n",
":width:`700px`\n",
":label:`fig_jupyter`\n",
"\n",
"由于你使用端口转发到端口8889,请复制 :numref:`fig_jupyter`红色框中的最后一行,将URL中的“8888”替换为“8889”,然后在本地浏览器中打开它。\n",
"\n",
"## 关闭未使用的实例\n",
"\n",
"由于云服务是按使用时间计费的,你应该关闭不使用的实例。请注意,还有其他选择:\n",
"\n",
"* “Stopping”(停止)实例意味着你可以重新启动它。这类似于关闭常规服务器的电源。但是,停止的实例仍将按保留的硬盘空间收取少量费用;\n",
"* “Terminating”(终止)实例将删除与其关联的所有数据。这包括磁盘,因此你不能再次启动它。只有在你知道将来不需要它的情况下才这样做。\n",
"\n",
"如果你想要将该实例用作更多实例的模板,请右击 :numref:`fig_connect`中的例子,然后选择“Image”$\\rightarrow$“Create”以创建该实例的镜像。完成后,选择“实例状态”$\\rightarrow$“终止”以终止实例。下次要使用此实例时,可以按照本节中的步骤基于保存的镜像创建实例。唯一的区别是,在 :numref:`fig_ubuntu`所示的“1.选择AMI”中,你必须使用左侧的“我的AMI”选项来选择你保存的镜像。创建的实例将保留镜像硬盘上存储的信息。例如,你不必重新安装CUDA和其他运行时环境。\n",
"\n",
"## 小结\n",
"\n",
"* 我们可以按需启动和停止实例,而不必购买和制造我们自己的计算机。\n",
"* 在使用支持GPU的深度学习框架之前,我们需要安装CUDA。\n",
"* 我们可以使用端口转发在远程服务器上运行Jupyter笔记本。\n",
"\n",
"## 练习\n",
"\n",
"1. 云提供了便利,但价格并不便宜。了解如何启动[spot实例](https://aws.amazon.com/ec2/spot/)以降低成本。\n",
"1. 尝试使用不同的GPU服务器。它们有多快?\n",
"1. 尝试使用多GPU服务器。你能把事情扩大到什么程度?\n",
"\n",
"[Discussions](https://discuss.d2l.ai/t/5733)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,182 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "14e728be",
"metadata": {
"origin_pos": 0
},
"source": [
"# 为本书做贡献\n",
":label:`sec_how_to_contribute`\n",
"\n",
"读者们的投稿大大帮助我们改进了本书的质量。\n",
"如果你发现笔误、无效的链接、一些你认为我们遗漏了引文的地方,\n",
"代码看起来不优雅,或者解释不清楚的地方,请回复我们以帮助读者。\n",
"在常规书籍中,两次印刷之间的间隔(即修订笔误的间隔)常常需要几年,\n",
"但这本书的改进通常需要几小时到几天的时间。\n",
"由于版本控制和持续自动集成(CI)测试,这一切颇为高效。\n",
"为此,你需要向gihub存储库提交一个\n",
"[pull request](https://github.com/d2l-ai/d2l-en/pulls)。\n",
"当你的pull请求被作者合并到代码库中时,\n",
"你将成为[贡献者](https://github.com/d2l-ai/d2l-en/graphs/contributors)。\n",
"\n",
"## 提交微小更改\n",
"\n",
"最常见的贡献是编辑一句话或修正笔误。\n",
"我们建议你在[GitHub存储库](https://github.com/d2l-ai/d2l-en)\n",
"中查找源文件,以定位源文件(一个markdown文件)。\n",
"然后单击右上角的“Edit this file”按钮,在markdown文件中进行更改。\n",
"\n",
"![在Github上编辑文件](../img/edit-file.png)\n",
":width:`300px`\n",
":label:`fig_edit_file`\n",
"\n",
"完成后,在页面底部的“Propose file change”(“提交文件修改”)\n",
"面板中填写更改说明,然后单击“Propose file change”按钮。\n",
"它会重定向到新页面以查看你的更改( :numref:`fig_git_createpr`)。\n",
"如果一切正常,你可以通过点击“Create pull request”按钮提交pull请求。\n",
"\n",
"## 大量文本或代码修改\n",
"\n",
"如果你计划修改大量文本或代码,那么你需要更多地了解本书使用的格式。\n",
"源文件基于[markdown格式](https://daringfireball.net/projects/markdown/syntax)\n",
"并通过[d2lbook](http://book.d2l.ai/user/markdown.html)包提供了一组扩展,\n",
"例如引用公式、图像、章节和引文。\n",
"你可以使用任何markdown编辑器打开这些文件并进行更改。\n",
"\n",
"如果你想要更改代码,我们建议你使用Jupyter Notebook打开这些标记文件,\n",
"如 :numref:`sec_jupyter`中所述。\n",
"这样你就可以运行并测试你的更改。\n",
"请记住在提交更改之前清除所有输出,我们的CI系统将执行你更新的部分以生成输出。\n",
"\n",
"某些部分可能支持多个框架实现。如果你添加的新代码块不是使用mxnet,\n",
"请使用`#@tab`来标记代码块的起始行。\n",
"例如`#@tab pytorch`用于一个PyTorch代码块,\n",
"`#@tab tensorflow`用于一个TensorFlow代码块,\n",
"`#@tab paddle`用于一个PaddlePaddle代码块,\n",
"或者`#@tab all`是所有实现的共享代码块。\n",
"你可以参考[d2lbook](http://book.d2l.ai/user/code_tabs.html)包了解更多信息。\n",
"\n",
"## 提交主要更改\n",
"\n",
"我们建议你使用标准的Git流程提交大量修改。\n",
"简而言之,该过程的工作方式如 :numref:`fig_contribute`中所述。\n",
"\n",
"![为这本书作贡献](../img/contribute.svg)\n",
":label:`fig_contribute`\n",
"\n",
"我们将向你详细介绍这些步骤。\n",
"如果你已经熟悉Git,可以跳过本部分。\n",
"在介绍时,我们假设贡献者的用户名为“astonzhang”。\n",
"\n",
"### 安装Git\n",
"\n",
"Git开源书籍描述了[如何安装git](https://git-scm.com/book/en/v2)。\n",
"这通常通过Ubuntu Linux上的`apt install git`\n",
"在MacOS上安装Xcode开发人员工具或使用gihub的\n",
"[桌面客户端](https://desktop.github.com)来实现。\n",
"如果你没有GitHub帐户,则需要注册一个帐户。\n",
"\n",
"### 登录GitHub\n",
"\n",
"在浏览器中输入本书代码存储库的[地址](https://github.com/d2l-ai/d2l-en/)。\n",
"单击 :numref:`fig_git_fork`右上角红色框中的`Fork`按钮,以复制本书的存储库。\n",
"这将是你的副本,你可以随心所欲地更改它。\n",
"\n",
"![代码存储库页面](../img/git-fork.png)\n",
":width:`700px`\n",
":label:`fig_git_fork`\n",
"\n",
"现在,本书的代码库将被分叉(即复制)到你的用户名,\n",
"例如`astonzhang/d2l-en`显示在 :numref:`fig_git_forked`的左上角。\n",
"\n",
"![分叉代码存储库](../img/git-forked.png)\n",
":width:`700px`\n",
":label:`fig_git_forked`\n",
"\n",
"### 克隆存储库\n",
"\n",
"要克隆存储库(即制作本地副本),我们需要获取其存储库地址。\n",
"点击 :numref:`fig_git_clone`中的绿色按钮显示此信息。\n",
"如果你决定将此分支保留更长时间,请确保你的本地副本与主存储库保持最新。\n",
"现在,只需按照 :ref:`chap_installation`中的说明开始。\n",
"主要区别在于,你现在下载的是你自己的存储库分支。\n",
"\n",
"![克隆存储库](../img/git-clone.png)\n",
":width:`700px`\n",
":label:`fig_git_clone`\n",
"\n",
"```\n",
"# 将your_github_username替换为你的github用户名\n",
"git clone https://github.com/your_github_username/d2l-en.git\n",
"```\n",
"\n",
"### 编辑和推送\n",
"\n",
"现在是编辑这本书的时候了。最好按照 :numref:`sec_jupyter`中的说明在Jupyter Notebook中编辑它。进行更改并检查它们是否正常。假设我们已经修改了文件`~/d2l-en/chapter_appendix_tools/how-to-contribute.md`中的一个拼写错误。你可以检查你更改了哪些文件。\n",
"\n",
"此时,Git将提示`chapter_appendix_tools/how-to-contribute.md`文件已被修改。\n",
"\n",
"```\n",
"mylaptop:d2l-en me$ git status\n",
"On branch master\n",
"Your branch is up-to-date with 'origin/master'.\n",
"\n",
"Changes not staged for commit:\n",
" (use \"git add <file>...\" to update what will be committed)\n",
" (use \"git checkout -- <file>...\" to discard changes in working directory)\n",
"\n",
"\tmodified: chapter_appendix_tools/how-to-contribute.md\n",
"```\n",
"\n",
"在确认这是你想要的之后,执行以下命令:\n",
"\n",
"```\n",
"git add chapter_appendix_tools/how-to-contribute.md\n",
"git commit -m 'fix typo in git documentation'\n",
"git push\n",
"```\n",
"\n",
"然后,更改后的代码将位于存储库的个人分支中。要请求添加更改,你必须为本书的官方存储库创建一个Pull请求。\n",
"\n",
"### 提交Pull请求\n",
"\n",
"如 :numref:`fig_git_newpr`所示,进入gihub上的存储库分支,选择“New pull request”。这将打开一个页面,显示你的编辑与本书主存储库中的当前内容之间的更改。\n",
"\n",
"![新的Pull请求](../img/git-newpr.png)\n",
":width:`700px`\n",
":label:`fig_git_newpr`\n",
"\n",
"最后,单击按钮提交Pull请求,如 :numref:`fig_git_createpr`所示。请务必描述你在Pull请求中所做的更改。这将使作者更容易审阅它,并将其与本书合并。根据更改的不同,这可能会立即被接受,也可能会被拒绝,或者更有可能的是,你会收到一些关于更改的反馈。一旦你把它们合并了,你就做完了。\n",
"\n",
"![创建Pull请求](../img/git-createpr.png)\n",
":width:`700px`\n",
":label:`fig_git_createpr`\n",
"\n",
"## 小结\n",
"\n",
"* 你可以使用GitHub为本书做贡献。\n",
"* 你可以直接在GitHub上编辑文件以进行微小更改。\n",
"* 要进行重大更改,请分叉存储库,在本地编辑内容,并在准备好后再做出贡献。\n",
"* 尽量不要提交巨大的Pull请求,因为这会使它们难以理解和合并。最好拆分为几个小一点的。\n",
"\n",
"## 练习\n",
"\n",
"1. 启动并分叉`d2l-ai/d2l-en`存储库。\n",
"1. 如果发现任何需要改进的地方(例如,缺少引用),请提交Pull请求。\n",
"1. 通常更好的做法是使用新分支创建Pull请求。学习如何用[Git分支](https://git-scm.com/book/en/v2/Git-Branching-Branches-in-a-Nutshell)来做这件事。\n",
"\n",
"[Discussions](https://discuss.d2l.ai/t/5730)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,105 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "1069a72a",
"metadata": {
"origin_pos": 0
},
"source": [
"# `d2l` API 文档\n",
":label:`sec_d2l`\n",
"\n",
"`d2l`包以下成员的实现及其定义和解释部分可在[源文件](https://github.com/d2l-ai/d2l-en/tree/master/d2l)中找到。\n"
]
},
{
"cell_type": "markdown",
"id": "c81dbb31",
"metadata": {
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"source": [
"```eval_rst\n",
".. currentmodule:: d2l.torch\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "7f0df80c",
"metadata": {
"origin_pos": 5
},
"source": [
"## 模型\n",
"\n",
"```eval_rst\n",
".. autoclass:: Module\n",
" :members:\n",
"\n",
".. autoclass:: LinearRegressionScratch\n",
" :members:\n",
"\n",
".. autoclass:: LinearRegression\n",
" :members:\n",
"\n",
".. autoclass:: Classification\n",
" :members:\n",
"```\n",
"\n",
"## 数据\n",
"\n",
"```eval_rst\n",
".. autoclass:: DataModule\n",
" :members:\n",
"\n",
".. autoclass:: SyntheticRegressionData\n",
" :members:\n",
"\n",
".. autoclass:: FashionMNIST\n",
" :members:\n",
"```\n",
"\n",
"## 训练\n",
"\n",
"```eval_rst\n",
".. autoclass:: Trainer\n",
" :members:\n",
"\n",
".. autoclass:: SGD\n",
" :members:\n",
"```\n",
"\n",
"## 公用\n",
"\n",
"```eval_rst\n",
".. autofunction:: add_to_class\n",
"\n",
".. autofunction:: cpu\n",
"\n",
".. autofunction:: gpu\n",
"\n",
".. autofunction:: num_gpus\n",
"\n",
".. autoclass:: ProgressBoard\n",
" :members:\n",
"\n",
".. autoclass:: HyperParameters\n",
" :members:\n",
"```\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,35 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b6a8abfc",
"metadata": {
"origin_pos": 0
},
"source": [
"# 附录:深度学习工具\n",
":label:`chap_appendix_tools`\n",
"\n",
"为了充分利用《动手学深度学习》,本书将在本附录中介绍不同工具,\n",
"例如如何运行这本交互式开源书籍和为本书做贡献。\n",
"\n",
":begin_tab:toc\n",
" - [jupyter](jupyter.ipynb)\n",
" - [sagemaker](sagemaker.ipynb)\n",
" - [aws](aws.ipynb)\n",
" - [selecting-servers-gpus](selecting-servers-gpus.ipynb)\n",
" - [contributing](contributing.ipynb)\n",
" - [d2l](d2l.ipynb)\n",
":end_tab:\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,133 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "5d3957b1",
"metadata": {
"origin_pos": 0
},
"source": [
"# 使用Jupyter Notebook\n",
":label:`sec_jupyter`\n",
"\n",
"本节介绍如何使用Jupyter Notebook编辑和运行本书各章中的代码。确保你已按照 :ref:`chap_installation`中的说明安装了Jupyter并下载了代码。如果你想了解更多关于Jupyter的信息,请参阅其[文档](https://jupyter.readthedocs.io/en/latest/)中的优秀教程。 \n",
"\n",
"## 在本地编辑和运行代码\n",
"\n",
"假设本书代码的本地路径为`xx/yy/d2l-en/`。使用shell将目录更改为此路径(`cd xx/yy/d2l-en`)并运行命令`jupyter notebook`。如果浏览器未自动打开,请打开http://localhost:8888。此时你将看到Jupyter的界面以及包含本书代码的所有文件夹,如 :numref:`fig_jupyter00`所示\n",
"\n",
"![包含本书代码的文件夹](../img/jupyter00.png)\n",
":width:`600px`\n",
":label:`fig_jupyter00`\n",
"\n",
"你可以通过单击网页上显示的文件夹来访问notebook文件。它们通常有后缀“.ipynb”。为了简洁起见,我们创建了一个临时的“test.ipynb”文件。单击后显示的内容如 :numref:`fig_jupyter01`所示。此notebook包括一个标记单元格和一个代码单元格。标记单元格中的内容包括“This Is a Title”和“This is text.”。代码单元包含两行Python代码。 \n",
"\n",
"![“test.ipynb”文件中的markdown和代码块](../img/jupyter01.png)\n",
":width:`600px`\n",
":label:`fig_jupyter01`\n",
"\n",
"双击标记单元格以进入编辑模式。在单元格末尾添加一个新的文本字符串“Hello world.”,如 :numref:`fig_jupyter02`所示。 \n",
"\n",
"![编辑markdown单元格](../img/jupyter02.png)\n",
":width:`600px`\n",
":label:`fig_jupyter02`\n",
"\n",
"如 :numref:`fig_jupyter03`所示,单击菜单栏中的“Cell” $\\rightarrow$ “Run Cells”以运行编辑后的单元格。 \n",
"\n",
"![运行单元格](../img/jupyter03.png)\n",
":width:`600px`\n",
":label:`fig_jupyter03`\n",
"\n",
"运行后,markdown单元格如 :numref:`fig_jupyter04`所示。 \n",
"\n",
"![编辑后的markdown单元格](../img/jupyter04.png)\n",
":width:`600px`\n",
":label:`fig_jupyter04`\n",
"\n",
"接下来,单击代码单元。将最后一行代码后的元素乘以2,如 :numref:`fig_jupyter05`所示。 \n",
"\n",
"![编辑代码单元格](../img/jupyter05.png)\n",
":width:`600px`\n",
":label:`fig_jupyter05`\n",
"\n",
"你还可以使用快捷键(默认情况下为Ctrl+Enter)运行单元格,并从 :numref:`fig_jupyter06`获取输出结果。 \n",
"\n",
"![运行代码单元格以获得输出](../img/jupyter06.png)\n",
":width:`600px`\n",
":label:`fig_jupyter06`\n",
"\n",
"当一个notebook包含更多单元格时,我们可以单击菜单栏中的“Kernel”$\\rightarrow$“Restart & Run All”来运行整个notebook中的所有单元格。通过单击菜单栏中的“Help”$\\rightarrow$“Edit Keyboard Shortcuts”,可以根据你的首选项编辑快捷键。 \n",
"\n",
"## 高级选项\n",
"\n",
"除了本地编辑,还有两件事非常重要:以markdown格式编辑notebook和远程运行Jupyter。当我们想要在更快的服务器上运行代码时,后者很重要。前者很重要,因为Jupyter原生的ipynb格式存储了大量辅助数据,这些数据实际上并不特定于notebook中的内容,主要与代码的运行方式和运行位置有关。这让git感到困惑,并且使得合并贡献非常困难。幸运的是,还有另一种选择——在markdown中进行本地编辑。 \n",
"\n",
"### Jupyter中的Markdown文件\n",
"\n",
"如果你希望对本书的内容有所贡献,则需要在GitHub上修改源文件(md文件,而不是ipynb文件)。使用notedown插件,我们可以直接在Jupyter中修改md格式的notebook。 \n",
"\n",
"首先,安装notedown插件,运行Jupyter Notebook并加载插件:\n",
"\n",
"```\n",
"pip install d2l-notedown # 你可能需要卸载原始notedown\n",
"jupyter notebook --NotebookApp.contents_manager_class='notedown.NotedownContentsManager'\n",
"```\n",
"\n",
"要在运行Jupyter Notebook时默认打开notedown插件,请执行以下操作:首先,生成一个Jupyter Notebook配置文件(如果已经生成了,可以跳过此步骤)。\n",
"\n",
"```\n",
"jupyter notebook --generate-config\n",
"```\n",
"\n",
"然后,在Jupyter Notebook配置文件的末尾添加以下行(对于Linux/macOS,通常位于`~/.jupyter/jupyter_notebook_config.py`):\n",
"\n",
"```\n",
"c.NotebookApp.contents_manager_class = 'notedown.NotedownContentsManager'\n",
"```\n",
"\n",
"在这之后,你只需要运行`jupyter notebook`命令就可以默认打开notedown插件。 \n",
"\n",
"### 在远程服务器上运行Jupyter Notebook\n",
"\n",
"有时,你可能希望在远程服务器上运行Jupyter Notebook,并通过本地计算机上的浏览器访问它。如果本地计算机上安装了Linux或MacOSWindows也可以通过PuTTY等第三方软件支持此功能),则可以使用端口转发:\n",
"\n",
"```\n",
"ssh myserver -L 8888:localhost:8888\n",
"```\n",
"\n",
"以上是远程服务器`myserver`的地址。然后我们可以使用http://localhost:8888 访问运行Jupyter Notebook的远程服务器`myserver`。下一节将详细介绍如何在AWS实例上运行Jupyter Notebook。 \n",
"\n",
"### 执行时间\n",
"\n",
"我们可以使用`ExecuteTime`插件来计算Jupyter Notebook中每个代码单元的执行时间。使用以下命令安装插件:\n",
"\n",
"```\n",
"pip install jupyter_contrib_nbextensions\n",
"jupyter contrib nbextension install --user\n",
"jupyter nbextension enable execute_time/ExecuteTime\n",
"```\n",
"\n",
"## 小结\n",
"\n",
"* 使用Jupyter Notebook工具,我们可以编辑、运行和为本书做贡献。\n",
"* 使用端口转发在远程服务器上运行Jupyter Notebook。\n",
"\n",
"## 练习\n",
"\n",
"1. 在本地计算机上使用Jupyter Notebook编辑并运行本书中的代码。\n",
"1. 使用Jupyter Notebook通过端口转发来远程编辑和运行本书中的代码。\n",
"1. 对于两个方矩阵,测量$\\mathbf{A}^\\top \\mathbf{B}$与$\\mathbf{A} \\mathbf{B}$在$\\mathbb{R}^{1024 \\times 1024}$中的运行时间。哪一个更快?\n",
"\n",
"[Discussions](https://discuss.d2l.ai/t/5731)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,153 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b0c43609",
"metadata": {
"origin_pos": 0
},
"source": [
"# 使用Amazon SageMaker\n",
":label:`sec_sagemaker`\n",
"\n",
"深度学习程序可能需要很多计算资源,这很容易超出你的本地计算机所能提供的范围。云计算服务允许你使用功能更强大的计算机更轻松地运行本书的GPU密集型代码。本节将介绍如何使用Amazon SageMaker运行本书的代码。\n",
"\n",
"## 注册\n",
"\n",
"首先,我们需要在注册一个帐户https://aws.amazon.com/。 为了增加安全性,鼓励使用双因素身份验证。设置详细的计费和支出警报也是一个好主意,以避免任何意外,例如,当忘记停止运行实例时。登录AWS帐户后,转到[console](http://console.aws.amazon.com/)并搜索“Amazon SageMaker”(参见 :numref:`fig_sagemaker`),然后单击它打开SageMaker面板。\n",
"\n",
"![搜索并打开SageMaker面板](../img/sagemaker.png)\n",
":width:`300px`\n",
":label:`fig_sagemaker`\n",
"\n",
"## 创建SageMaker实例\n",
"\n",
"接下来,让我们创建一个notebook实例,如 :numref:`fig_sagemaker-create`所示。\n",
"\n",
"![创建一个SageMaker实例](../img/sagemaker-create.png)\n",
":width:`400px`\n",
":label:`fig_sagemaker-create`\n",
"\n",
"SageMaker提供多个具有不同计算能力和价格的[实例类型](https://aws.amazon.com/sagemaker/pricing/instance-types/)。创建notebook实例时,可以指定其名称和类型。在 :numref:`fig_sagemaker-create-2`中,我们选择`ml.p3.2xlarge`:使用一个Tesla V100 GPU和一个8核CPU,这个实例的性能足够本书的大部分内容使用。\n",
"\n",
"![选择实例类型](../img/sagemaker-create-2.png)\n",
":width:`400px`\n",
":label:`fig_sagemaker-create-2`\n"
]
},
{
"cell_type": "markdown",
"id": "87a915ca",
"metadata": {
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"source": [
"用于与SageMaker一起运行的ipynb格式的整本书可从https://github.com/d2l-ai/d2l-pytorch-sagemaker获得。\n",
"我们可以指定此GitHub存储库URL :numref:`fig_sagemaker-create-3`),以允许SageMaker在创建实例时克隆它。\n"
]
},
{
"cell_type": "markdown",
"id": "061d3b04",
"metadata": {
"origin_pos": 4
},
"source": [
"![指定GitHub存储库](../img/sagemaker-create-3.png)\n",
":width:`400px`\n",
":label:`fig_sagemaker-create-3`\n",
"\n",
"## 运行和停止实例\n",
"\n",
"创建实例可能需要几分钟的时间。当实例准备就绪时,单击它旁边的“Open Jupyter”链接( :numref:`fig_sagemaker-open`),以便你可以在此实例上编辑并运行本书的所有Jupyter Notebook(类似于 :numref:`sec_jupyter`中的步骤)。\n",
"\n",
"![在创建的SageMaker实例上打开Jupyter](../img/sagemaker-open.png)\n",
":width:`400px`\n",
":label:`fig_sagemaker-open`\n",
"\n",
"完成工作后,不要忘记停止实例以避免进一步收费( :numref:`fig_sagemaker-stop`)。\n",
"\n",
"![停止SageMaker实例](../img/sagemaker-stop.png)\n",
":width:`300px`\n",
":label:`fig_sagemaker-stop`\n",
"\n",
"## 更新Notebook\n"
]
},
{
"cell_type": "markdown",
"id": "f55e7f4e",
"metadata": {
"origin_pos": 6,
"tab": [
"pytorch"
]
},
"source": [
"这本开源书的notebook将定期在GitHub上的[d2l-ai/d2l-pytorch-sagemaker](https://github.com/d2l-ai/d2l-pytorch-sagemaker)存储库中更新。要更新至最新版本,你可以在SageMaker实例( :numref:`fig_sagemaker-terminal`)上打开终端。\n"
]
},
{
"cell_type": "markdown",
"id": "f2b7db7b",
"metadata": {
"origin_pos": 8
},
"source": [
"![在SageMaker实例上打开终端](../img/sagemaker-terminal.png)\n",
":width:`300px`\n",
":label:`fig_sagemaker-terminal`\n",
"\n",
"你可能希望在从远程存储库提取更新之前提交本地更改。否则,只需在终端中使用以下命令放弃所有本地更改:\n"
]
},
{
"cell_type": "markdown",
"id": "b1900934",
"metadata": {
"origin_pos": 10,
"tab": [
"pytorch"
]
},
"source": [
"```bash\n",
"cd SageMaker/d2l-pytorch-sagemaker/\n",
"git reset --hard\n",
"git pull\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "5060f222",
"metadata": {
"origin_pos": 12
},
"source": [
"## 小结\n",
"\n",
"* 我们可以使用Amazon SageMaker创建一个GPU的notebook实例来运行本书的密集型代码。\n",
"* 我们可以通过Amazon SageMaker实例上的终端更新notebooks。\n",
"\n",
"## 练习\n",
"\n",
"1. 使用Amazon SageMaker编辑并运行任何需要GPU的部分。\n",
"1. 打开终端以访问保存本书所有notebooks的本地目录。\n",
"\n",
"[Discussions](https://discuss.d2l.ai/t/5732)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,82 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "c34c875c",
"metadata": {
"origin_pos": 0
},
"source": [
"# 选择服务器和GPU\n",
":label:`sec_buy_gpu`\n",
"\n",
"深度学习训练通常需要大量的计算。目前,GPU是深度学习最具成本效益的硬件加速器。与CPU相比,GPU更便宜,性能更高,通常超过一个数量级。此外,一台服务器可以支持多个GPU,高端服务器最多支持8个GPU。更典型的数字是工程工作站最多4个GPU,这是因为热量、冷却和电源需求会迅速增加,超出办公楼所能支持的范围。对于更大的部署,云计算(例如亚马逊的[P3](https://aws.amazon.com/ec2/instance-types/p3/)和[G4](https://aws.amazon.com/blogs/aws/in-the-works-ec2-instances-g4-with-nvidia-t4-gpus/)实例)是一个更实用的解决方案。\n",
"\n",
"## 选择服务器\n",
"\n",
    "通常不需要购买具有多个线程的高端CPU,因为大部分计算都发生在GPU上。话虽如此,由于Python中的全局解释器锁(GIL),CPU的单线程性能在使用4-8个GPU的情况下可能很重要。在其他条件相同的情况下,这意味着核数较少但时钟频率较高的CPU可能是更经济的选择。例如,当在6核4GHz和8核3.5GHz的CPU之间进行选择时,前者更可取,即使其聚合速度较低。一个重要的考虑因素是,GPU使用大量的电能,从而释放大量的热量。这需要非常好的冷却和足够大的机箱来容纳GPU。如有可能,请遵循以下指南:\n",
"\n",
"1. **电源**。GPU使用大量的电源。每个设备预计高达350W(检查显卡的*峰值需求*而不是一般需求,因为高效代码可能会消耗大量能源)。如果电源不能满足需求,系统会变得不稳定。\n",
"1. **机箱尺寸**。GPU很大,辅助电源连接器通常需要额外的空间。此外,大型机箱更容易冷却。\n",
    "1. **GPU散热**。如果有大量的GPU,可能需要投资水冷。此外,即使风扇较少,也应以“公版设计”为目标,因为公版卡足够薄,可以在设备之间进气。当安装多个GPU时,多风扇GPU可能会太厚,难以获得足够的进气。\n",
    "1. **PCIe插槽**。在CPU与GPU之间来回移动数据(以及在GPU之间交换数据)需要大量带宽。建议使用16通道的PCIe 3.0插槽。当安装了多个GPU时,请务必仔细阅读主板说明,以确保在同时使用多个GPU时16$\\times$带宽仍然可用,并且使用的是PCIe 3.0,而不是用于附加插槽的PCIe 2.0。在安装多个GPU的情况下,一些主板的带宽会降级到8$\\times$甚至4$\\times$。这部分是由于CPU提供的PCIe通道数量有限。\n",
"\n",
"简而言之,以下是构建深度学习服务器的一些建议。\n",
"\n",
"* **初学者**。购买低功耗的低端GPU(适合深度学习的廉价游戏GPU,功耗150-200W)。如果幸运的话,大家现在常用的计算机将支持它。\n",
"* **1个GPU**。一个4核的低端CPU就足够了,大多数主板也足够了。以至少32 GB的DRAM为目标,投资SSD进行本地数据访问。600W的电源应足够。买一个有很多风扇的GPU。\n",
"* **2个GPU**。一个4-6核的低端CPU就足够了。可以考虑64 GB的DRAM并投资于SSD。两个高端GPU将需要1000瓦的功率。对于主板,请确保它们具有*两个*PCIe 3.0 x16插槽。如果可以,请使用PCIe 3.0 x16插槽之间有两个可用空间(60毫米间距)的主板,以提供额外的空气。在这种情况下,购买两个具有大量风扇的GPU。\n",
    "* **4个GPU**。确保购买的CPU具有相对较快的单线程速度(即较高的时钟频率)。可能需要具有更多PCIe通道的CPU,例如AMD Threadripper。可能需要相对昂贵的主板才能获得4个PCIe 3.0 x16插槽,因为它们可能需要一个PLX来多路复用PCIe通道。购买带有公版设计的GPU,这些GPU很窄,并且让空气进入GPU之间。需要一个1600-2000W的电源,而办公室的插座可能不支持。此服务器在运行时可能*声音很大、很热*,最好不要把它放在桌子下面。建议使用128 GB的DRAM。获取一个用于本地存储的SSD(1-2 TB NVMe)和RAID配置的硬盘来存储数据。\n",
    "* **8个GPU**。需要购买带有多个冗余电源的专用多GPU服务器机箱(例如,每个电源为1600W时为2+1)。这将需要双插槽服务器CPU、256 GB ECC DRAM、快速网卡(建议使用10 GbE),并且需要检查服务器是否支持GPU的*物理外形*。消费级GPU和服务器GPU之间的气流和布线位置存在显著差异(例如RTX 2080和Tesla V100)。这意味着可能无法在服务器中安装消费级GPU,因为电源线间隙不足或缺少合适的接线(本书一位合著者痛苦地发现了这一点)。\n",
"\n",
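    "关于上面PCIe带宽的建议,可以用一个粗略的算术示例量化x16与x8插槽的差距(带宽取近似的有效值,参数量为假设,仅作演示):\n",
    "\n",
    "```python\n",
    "# 估算在PCIe 3.0 x16与x8插槽上传输1亿个float32梯度所需的时间\n",
    "# (15.75 GB/s与7.88 GB/s为近似的有效带宽,属于假设取值)\n",
    "params = 100_000_000\n",
    "total_bytes = params * 4  # float32每个参数占4字节\n",
    "for lanes, gbps in [('x16', 15.75), ('x8', 7.88)]:\n",
    "    t_ms = total_bytes / (gbps * 1e9) * 1e3\n",
    "    print(lanes, round(t_ms, 1), 'ms')\n",
    "```\n",
    "\n",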
"## 选择GPU\n",
"\n",
"目前,AMD和NVIDIA是专用GPU的两大主要制造商。NVIDIA是第一个进入深度学习领域的公司,通过CUDA为深度学习框架提供更好的支持。因此,大多数买家选择NVIDIA GPU。\n",
"\n",
"NVIDIA提供两种类型的GPU,针对个人用户(例如,通过GTX和RTX系列)和企业用户(通过其Tesla系列)。这两种类型的GPU提供了相当的计算能力。但是,企业用户GPU通常使用强制(被动)冷却、更多内存和ECC(纠错)内存。这些GPU更适用于数据中心,通常成本是消费者GPU的十倍。\n",
"\n",
"如果是一个拥有100个服务器的大公司,则应该考虑英伟达Tesla系列,或者在云中使用GPU服务器。对于实验室或10+服务器的中小型公司,英伟达RTX系列可能是最具成本效益的,可以购买超微或华硕机箱的预配置服务器,这些服务器可以有效地容纳4-8个GPU。\n",
"\n",
    "GPU供应商通常每一到两年发布一代新GPU,例如2017年发布的GTX 1000(Pascal)系列和2019年发布的RTX 2000(Turing)系列。每个系列都提供几种不同的型号,提供不同的性能级别。GPU性能主要是以下三个参数的组合:\n",
"\n",
    "1. **计算能力**。通常大家会追求32位浮点计算能力,16位浮点(FP16)训练也已进入主流。如果只对预测感兴趣,还可以使用8位整数。最新一代图灵GPU提供4-bit加速。不幸的是,目前训练低精度网络的算法还没有普及;\n",
"1. **内存大小**。随着模型变大或训练期间使用的批量变大,将需要更多的GPU内存。检查HBM2(高带宽内存)与GDDR6(图形DDR)内存。HBM2速度更快,但成本更高;\n",
"1. **内存带宽**。当有足够的内存带宽时,才能最大限度地利用计算能力。如果使用GDDR6,请追求宽内存总线。\n",
"\n",
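    "内存大小为何重要,可以用一个极简的算术示例说明(参数量与优化器均为假设,且忽略了激活值与批量带来的占用):\n",
    "\n",
    "```python\n",
    "# 粗略估算训练时与参数相关的显存:参数+梯度+Adam的两个动量项,均按float32计\n",
    "params = 25_000_000                      # 假设的模型参数量\n",
    "mem_bytes = params * 4 * (1 + 1 + 2)     # 参数、梯度、两个动量项\n",
    "print(round(mem_bytes / 2**30, 2), 'GB')\n",
    "```\n",
    "\n",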
"对于大多数用户,只需看看计算能力就足够了。请注意,许多GPU提供不同类型的加速。例如,NVIDIA的Tensor Cores将操作符子集的速度提高了5$\\times$。确保所使用的库支持这一点。GPU内存应不小于4GB(8GB更好)。尽量避免将GPU也用于显示GUI(改用内置显卡)。如果无法避免,请添加额外的2GB RAM以确保安全。\n",
"\n",
    ":numref:`fig_flopsvsprice`比较了各种GTX 900、GTX 1000和RTX 2000系列GPU的浮点计算能力(GFlops)和价格(Price)。价格是维基百科上的建议价格。\n",
"\n",
"![浮点计算能力和价格比较](../img/flopsvsprice.svg)\n",
":label:`fig_flopsvsprice`\n",
"\n",
    "从上图中可以看出以下几点:\n",
"\n",
"1. 在每个系列中,价格和性能大致成比例。Titan因拥有大GPU内存而有相当的溢价。然而,通过比较980 Ti和1080 Ti可以看出,较新型号具有更好的成本效益。RTX 2000系列的价格似乎没有多大提高。然而,它们提供了更优秀的低精度性能(FP16、INT8和INT4);\n",
"2. GTX 1000系列的性价比大约是900系列的两倍;\n",
"3. 对于RTX 2000系列,浮点计算能力是价格的“仿射”函数。\n",
"\n",
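    "上面所说的“仿射”关系,可以用一次最小二乘拟合来检验。下面是一个示意(GFlops与价格均为随意假设的数据,并非真实报价):\n",
    "\n",
    "```python\n",
    "# 对 价格 ≈ a*GFlops + b 做最小二乘拟合(纯Python实现,数据为假设)\n",
    "gflops = [8000.0, 10000.0, 13000.0]\n",
    "price = [350.0, 500.0, 700.0]\n",
    "n = len(gflops)\n",
    "mx = sum(gflops) / n\n",
    "my = sum(price) / n\n",
    "a = sum((x - mx) * (y - my) for x, y in zip(gflops, price)) / sum((x - mx) ** 2 for x in gflops)\n",
    "b = my - a * mx\n",
    "print(round(a, 3), round(b, 1))\n",
    "```\n",
    "\n",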
"![浮点计算能力和能耗](../img/wattvsprice.svg)\n",
":label:`fig_wattvsprice`\n",
"\n",
    ":numref:`fig_wattvsprice`显示,能耗与计算量基本成线性关系,并且新一代GPU的能效更高。RTX 2000系列的曲线似乎与此矛盾,但这是Tensor Core不成比例的高能耗所导致的结果。\n",
"\n",
"## 小结\n",
"\n",
"* 在构建服务器时,请注意电源、PCIe总线通道、CPU单线程速度和散热。\n",
"* 如果可能,应该购买最新一代的GPU。\n",
"* 使用云进行大型部署。\n",
"* 高密度服务器可能不与所有GPU兼容。在购买之前,请检查一下机械和散热规格。\n",
"* 为提高效率,请使用FP16或更低的精度。\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,967 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "9122670f",
"metadata": {
"origin_pos": 0
},
"source": [
"# 注意力提示\n",
":label:`sec_attention-cues`\n",
"\n",
"感谢读者对本书的关注,因为读者的注意力是一种稀缺的资源:\n",
"此刻读者正在阅读本书(而忽略了其他的书),\n",
"因此读者的注意力是用机会成本(与金钱类似)来支付的。\n",
"为了确保读者现在投入的注意力是值得的,\n",
"作者们尽全力(全部的注意力)创作一本好书。\n",
"\n",
"自经济学研究稀缺资源分配以来,人们正处在“注意力经济”时代,\n",
"即人类的注意力被视为可以交换的、有限的、有价值的且稀缺的商品。\n",
"许多商业模式也被开发出来去利用这一点:\n",
"在音乐或视频流媒体服务上,人们要么消耗注意力在广告上,要么付钱来隐藏广告;\n",
    "为了在网络游戏世界中成长,人们要么消耗注意力在游戏战斗中,\n",
"从而帮助吸引新的玩家,要么付钱立即变得强大。\n",
"总之,注意力不是免费的。\n",
"\n",
"注意力是稀缺的,而环境中的干扰注意力的信息却并不少。\n",
"比如人类的视觉神经系统大约每秒收到$10^8$位的信息,\n",
"这远远超过了大脑能够完全处理的水平。\n",
"幸运的是,人类的祖先已经从经验(也称为数据)中认识到\n",
"“并非感官的所有输入都是一样的”。\n",
"在整个人类历史中,这种只将注意力引向感兴趣的一小部分信息的能力,\n",
"使人类的大脑能够更明智地分配资源来生存、成长和社交,\n",
"例如发现天敌、找寻食物和伴侣。\n",
"\n",
"## 生物学中的注意力提示\n",
"\n",
"注意力是如何应用于视觉世界中的呢?\n",
"这要从当今十分普及的*双组件*(two-component)的框架开始讲起:\n",
"这个框架的出现可以追溯到19世纪90年代的威廉·詹姆斯,\n",
"他被认为是“美国心理学之父” :cite:`James.2007`。\n",
"在这个框架中,受试者基于*非自主性提示*和*自主性提示*\n",
"有选择地引导注意力的焦点。\n",
"\n",
"非自主性提示是基于环境中物体的突出性和易见性。\n",
"想象一下,假如我们面前有五个物品:\n",
"一份报纸、一篇研究论文、一杯咖啡、一本笔记本和一本书,\n",
"就像 :numref:`fig_eye-coffee`。\n",
"所有纸制品都是黑白印刷的,但咖啡杯是红色的。\n",
"换句话说,这个咖啡杯在这种视觉环境中是突出和显眼的,\n",
"不由自主地引起人们的注意。\n",
"所以我们会把视力最敏锐的地方放到咖啡上,\n",
"如 :numref:`fig_eye-coffee`所示。\n",
"\n",
"![由于突出性的非自主性提示(红杯子),注意力不自主地指向了咖啡杯](../img/eye-coffee.svg)\n",
":width:`400px`\n",
":label:`fig_eye-coffee`\n",
"\n",
"喝咖啡后,我们会变得兴奋并想读书,\n",
"所以转过头,重新聚焦眼睛,然后看看书,\n",
"就像 :numref:`fig_eye-book`中描述那样。\n",
"与 :numref:`fig_eye-coffee`中由于突出性导致的选择不同,\n",
"此时选择书是受到了认知和意识的控制,\n",
"因此注意力在基于自主性提示去辅助选择时将更为谨慎。\n",
    "由于受到受试者主观意愿的推动,这种选择的力量也就更强大。\n",
"\n",
"![依赖于任务的意志提示(想读一本书),注意力被自主引导到书上](../img/eye-book.svg)\n",
":width:`400px`\n",
":label:`fig_eye-book`\n",
"\n",
"## 查询、键和值\n",
"\n",
"自主性的与非自主性的注意力提示解释了人类的注意力的方式,\n",
"下面来看看如何通过这两种注意力提示,\n",
"用神经网络来设计注意力机制的框架,\n",
"\n",
"首先,考虑一个相对简单的状况,\n",
"即只使用非自主性提示。\n",
"要想将选择偏向于感官输入,\n",
"则可以简单地使用参数化的全连接层,\n",
"甚至是非参数化的最大汇聚层或平均汇聚层。\n",
"\n",
"因此,“是否包含自主性提示”将注意力机制与全连接层或汇聚层区别开来。\n",
"在注意力机制的背景下,自主性提示被称为*查询*(query)。\n",
    "给定任何查询,注意力机制通过*注意力汇聚*(attention pooling)\n",
    "将选择引导至*感官输入*(sensory inputs,例如中间特征表示)。\n",
"在注意力机制中,这些感官输入被称为*值*(value)。\n",
    "更通俗地解释,每个值都与一个*键*(key)配对,\n",
"这可以想象为感官输入的非自主提示。\n",
"如 :numref:`fig_qkv`所示,可以通过设计注意力汇聚的方式,\n",
"便于给定的查询(自主性提示)与键(非自主性提示)进行匹配,\n",
"这将引导得出最匹配的值(感官输入)。\n",
"\n",
"![注意力机制通过注意力汇聚将*查询*(自主性提示)和*键*(非自主性提示)结合在一起,实现对*值*(感官输入)的选择倾向](../img/qkv.svg)\n",
":label:`fig_qkv`\n",
"\n",
"鉴于上面所提框架在 :numref:`fig_qkv`中的主导地位,\n",
"因此这个框架下的模型将成为本章的中心。\n",
"然而,注意力机制的设计有许多替代方案。\n",
"例如可以设计一个不可微的注意力模型,\n",
"该模型可以使用强化学习方法 :cite:`Mnih.Heess.Graves.ea.2014`进行训练。\n",
"\n",
"## 注意力的可视化\n",
"\n",
"平均汇聚层可以被视为输入的加权平均值,\n",
"其中各输入的权重是一样的。\n",
    "实际上,注意力汇聚输出的也是值的加权平均,\n",
"其中权重是在给定的查询和不同的键之间计算得出的。\n"
]
},
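  {
   "cell_type": "markdown",
   "id": "a3f1c2d4",
   "metadata": {},
   "source": [
    "作为这种“按权重对值求加权平均”思想的一个极简示意,可以用纯Python演示注意力汇聚的计算(打分与值均为假设的数字,并非书中代码):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "# 注意力汇聚:权重由打分经softmax得到,输出是值的加权平均\n",
    "scores = [2.0, 0.5, -1.0]    # 查询与3个键的相似度打分(假设值)\n",
    "values = [10.0, 20.0, 30.0]  # 与键配对的值(假设值)\n",
    "exps = [math.exp(s) for s in scores]\n",
    "weights = [e / sum(exps) for e in exps]\n",
    "output = sum(w * v for w, v in zip(weights, values))\n",
    "print(round(output, 3))\n",
    "```\n",
    "\n",
    "权重之和为1;打分最高的键所配对的值在输出中占比最大。\n"
   ]
  },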
{
"cell_type": "code",
"execution_count": 1,
"id": "58a7898f",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:03:03.521946Z",
"iopub.status.busy": "2023-08-18T07:03:03.521507Z",
"iopub.status.idle": "2023-08-18T07:03:05.621623Z",
"shell.execute_reply": "2023-08-18T07:03:05.620583Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import torch\n",
"from d2l import torch as d2l"
]
},
{
"cell_type": "markdown",
"id": "6e9e92fc",
"metadata": {
"origin_pos": 5
},
"source": [
"为了可视化注意力权重,需要定义一个`show_heatmaps`函数。\n",
"其输入`matrices`的形状是\n",
"(要显示的行数,要显示的列数,查询的数目,键的数目)。\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "3c30d535",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:03:05.627152Z",
"iopub.status.busy": "2023-08-18T07:03:05.626530Z",
"iopub.status.idle": "2023-08-18T07:03:05.634951Z",
"shell.execute_reply": "2023-08-18T07:03:05.633763Z"
},
"origin_pos": 6,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"def show_heatmaps(matrices, xlabel, ylabel, titles=None, figsize=(2.5, 2.5),\n",
" cmap='Reds'):\n",
" \"\"\"显示矩阵热图\"\"\"\n",
" d2l.use_svg_display()\n",
" num_rows, num_cols = matrices.shape[0], matrices.shape[1]\n",
" fig, axes = d2l.plt.subplots(num_rows, num_cols, figsize=figsize,\n",
" sharex=True, sharey=True, squeeze=False)\n",
" for i, (row_axes, row_matrices) in enumerate(zip(axes, matrices)):\n",
" for j, (ax, matrix) in enumerate(zip(row_axes, row_matrices)):\n",
" pcm = ax.imshow(matrix.detach().numpy(), cmap=cmap)\n",
" if i == num_rows - 1:\n",
" ax.set_xlabel(xlabel)\n",
" if j == 0:\n",
" ax.set_ylabel(ylabel)\n",
" if titles:\n",
" ax.set_title(titles[j])\n",
" fig.colorbar(pcm, ax=axes, shrink=0.6);"
]
},
{
"cell_type": "markdown",
"id": "f48978d9",
"metadata": {
"origin_pos": 7
},
"source": [
"下面使用一个简单的例子进行演示。\n",
"在本例子中,仅当查询和键相同时,注意力权重为1,否则为0。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "bbabe8f3",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:03:05.640096Z",
"iopub.status.busy": "2023-08-18T07:03:05.639355Z",
"iopub.status.idle": "2023-08-18T07:03:05.886353Z",
"shell.execute_reply": "2023-08-18T07:03:05.885235Z"
},
"origin_pos": 8,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"image/svg+xml": [
"<?xml version=\"1.0\" encoding=\"utf-8\" standalone=\"no\"?>\n",
"<!DOCTYPE svg PUBLIC \"-//W3C//DTD SVG 1.1//EN\"\n",
" \"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd\">\n",
"<svg xmlns:xlink=\"http://www.w3.org/1999/xlink\" width=\"193.35825pt\" height=\"156.35625pt\" viewBox=\"0 0 193.35825 156.35625\" xmlns=\"http://www.w3.org/2000/svg\" version=\"1.1\">\n",
" <metadata>\n",
" <rdf:RDF xmlns:dc=\"http://purl.org/dc/elements/1.1/\" xmlns:cc=\"http://creativecommons.org/ns#\" xmlns:rdf=\"http://www.w3.org/1999/02/22-rdf-syntax-ns#\">\n",
" <cc:Work>\n",
" <dc:type rdf:resource=\"http://purl.org/dc/dcmitype/StillImage\"/>\n",
" <dc:date>2023-08-18T07:03:05.823629</dc:date>\n",
" <dc:format>image/svg+xml</dc:format>\n",
" <dc:creator>\n",
" <cc:Agent>\n",
" <dc:title>Matplotlib v3.5.1, https://matplotlib.org/</dc:title>\n",
" </cc:Agent>\n",
" </dc:creator>\n",
" </cc:Work>\n",
" </rdf:RDF>\n",
" </metadata>\n",
" <defs>\n",
" <style type=\"text/css\">*{stroke-linejoin: round; stroke-linecap: butt}</style>\n",
" </defs>\n",
" <g id=\"figure_1\">\n",
" <g id=\"patch_1\">\n",
" <path d=\"M -0 156.35625 \n",
"L 193.35825 156.35625 \n",
"L 193.35825 0 \n",
"L -0 0 \n",
"L -0 156.35625 \n",
"z\n",
"\" style=\"fill: none\"/>\n",
" </g>\n",
" <g id=\"axes_1\">\n",
" <g id=\"patch_2\">\n",
" <path d=\"M 34.240625 118.8 \n",
"L 145.840625 118.8 \n",
"L 145.840625 7.2 \n",
"L 34.240625 7.2 \n",
"z\n",
"\" style=\"fill: #ffffff\"/>\n",
" </g>\n",
" <g clip-path=\"url(#p3a36c213e6)\">\n",
" <image xlink:href=\"data:image/png;base64,\n",
"iVBORw0KGgoAAAANSUhEUgAAAHAAAABwCAYAAADG4PRLAAABc0lEQVR4nO3dsW3CUBhGURPRJSNkPxgpA2YAGmqnpfRDMnnXnFO7sHT1N58s+bTeb+vCVK6f35uf/djxPXgBAeMEjBMwTsA4AeMEjBMwTsA4AePO//0C72JkHvu5/25+1gXGCRgnYJyAcQLGCRgnYJyAcQLGCRhnSnvSyDS2LGPz2AgXGCdgnIBxAsYJGCdgnIBxAsYJGCdgnIBxttAHe336tycXGCdgnIBxAsYJGCdgnIBxAsYJGCdg3OGntOI8NsIFxgkYJ2CcgHECxgkYJ2CcgHECxgkYl5zSjj6PjXCBcQLGCRgnYJyAcQLGCRgnYJyAcQLGTTOlmcee4wLjBIwTME7AOAHjBIwTME7AOAHjBIzbbUqb5b8KR+cC4wSMEzBOwDgB4wSMEzBOwDgB4wSMEzBuaAv16d98XGCcgHECxgkYJ2CcgHECxgkYJ2CcgHGny/K1bn3YPDYfFxgnYJyAcQLGCRgnYJyAcQLGCRgnYNwfkEghRAiKZdAAAAAASUVORK5CYII=\" id=\"image8a96b358df\" transform=\"scale(1 -1)translate(0 -112)\" x=\"34.240625\" y=\"-6.8\" width=\"112\" height=\"112\"/>\n",
" </g>\n",
" <g id=\"matplotlib.axis_1\">\n",
" <g id=\"xtick_1\">\n",
" <g id=\"line2d_1\">\n",
" <defs>\n",
" <path id=\"mdcf2e5a8c4\" d=\"M 0 0 \n",
"L 0 3.5 \n",
"\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </defs>\n",
" <g>\n",
" <use xlink:href=\"#mdcf2e5a8c4\" x=\"39.820625\" y=\"118.8\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_1\">\n",
" <!-- 0 -->\n",
" <g transform=\"translate(36.639375 133.398438)scale(0.1 -0.1)\">\n",
" <defs>\n",
" <path id=\"DejaVuSans-30\" d=\"M 2034 4250 \n",
"Q 1547 4250 1301 3770 \n",
"Q 1056 3291 1056 2328 \n",
"Q 1056 1369 1301 889 \n",
"Q 1547 409 2034 409 \n",
"Q 2525 409 2770 889 \n",
"Q 3016 1369 3016 2328 \n",
"Q 3016 3291 2770 3770 \n",
"Q 2525 4250 2034 4250 \n",
"z\n",
"M 2034 4750 \n",
"Q 2819 4750 3233 4129 \n",
"Q 3647 3509 3647 2328 \n",
"Q 3647 1150 3233 529 \n",
"Q 2819 -91 2034 -91 \n",
"Q 1250 -91 836 529 \n",
"Q 422 1150 422 2328 \n",
"Q 422 3509 836 4129 \n",
"Q 1250 4750 2034 4750 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" </defs>\n",
" <use xlink:href=\"#DejaVuSans-30\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"xtick_2\">\n",
" <g id=\"line2d_2\">\n",
" <g>\n",
" <use xlink:href=\"#mdcf2e5a8c4\" x=\"95.620625\" y=\"118.8\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_2\">\n",
" <!-- 5 -->\n",
" <g transform=\"translate(92.439375 133.398438)scale(0.1 -0.1)\">\n",
" <defs>\n",
" <path id=\"DejaVuSans-35\" d=\"M 691 4666 \n",
"L 3169 4666 \n",
"L 3169 4134 \n",
"L 1269 4134 \n",
"L 1269 2991 \n",
"Q 1406 3038 1543 3061 \n",
"Q 1681 3084 1819 3084 \n",
"Q 2600 3084 3056 2656 \n",
"Q 3513 2228 3513 1497 \n",
"Q 3513 744 3044 326 \n",
"Q 2575 -91 1722 -91 \n",
"Q 1428 -91 1123 -41 \n",
"Q 819 9 494 109 \n",
"L 494 744 \n",
"Q 775 591 1075 516 \n",
"Q 1375 441 1709 441 \n",
"Q 2250 441 2565 725 \n",
"Q 2881 1009 2881 1497 \n",
"Q 2881 1984 2565 2268 \n",
"Q 2250 2553 1709 2553 \n",
"Q 1456 2553 1204 2497 \n",
"Q 953 2441 691 2322 \n",
"L 691 4666 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" </defs>\n",
" <use xlink:href=\"#DejaVuSans-35\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_3\">\n",
" <!-- Keys -->\n",
" <g transform=\"translate(78.371094 147.076563)scale(0.1 -0.1)\">\n",
" <defs>\n",
" <path id=\"DejaVuSans-4b\" d=\"M 628 4666 \n",
"L 1259 4666 \n",
"L 1259 2694 \n",
"L 3353 4666 \n",
"L 4166 4666 \n",
"L 1850 2491 \n",
"L 4331 0 \n",
"L 3500 0 \n",
"L 1259 2247 \n",
"L 1259 0 \n",
"L 628 0 \n",
"L 628 4666 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" <path id=\"DejaVuSans-65\" d=\"M 3597 1894 \n",
"L 3597 1613 \n",
"L 953 1613 \n",
"Q 991 1019 1311 708 \n",
"Q 1631 397 2203 397 \n",
"Q 2534 397 2845 478 \n",
"Q 3156 559 3463 722 \n",
"L 3463 178 \n",
"Q 3153 47 2828 -22 \n",
"Q 2503 -91 2169 -91 \n",
"Q 1331 -91 842 396 \n",
"Q 353 884 353 1716 \n",
"Q 353 2575 817 3079 \n",
"Q 1281 3584 2069 3584 \n",
"Q 2775 3584 3186 3129 \n",
"Q 3597 2675 3597 1894 \n",
"z\n",
"M 3022 2063 \n",
"Q 3016 2534 2758 2815 \n",
"Q 2500 3097 2075 3097 \n",
"Q 1594 3097 1305 2825 \n",
"Q 1016 2553 972 2059 \n",
"L 3022 2063 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" <path id=\"DejaVuSans-79\" d=\"M 2059 -325 \n",
"Q 1816 -950 1584 -1140 \n",
"Q 1353 -1331 966 -1331 \n",
"L 506 -1331 \n",
"L 506 -850 \n",
"L 844 -850 \n",
"Q 1081 -850 1212 -737 \n",
"Q 1344 -625 1503 -206 \n",
"L 1606 56 \n",
"L 191 3500 \n",
"L 800 3500 \n",
"L 1894 763 \n",
"L 2988 3500 \n",
"L 3597 3500 \n",
"L 2059 -325 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" <path id=\"DejaVuSans-73\" d=\"M 2834 3397 \n",
"L 2834 2853 \n",
"Q 2591 2978 2328 3040 \n",
"Q 2066 3103 1784 3103 \n",
"Q 1356 3103 1142 2972 \n",
"Q 928 2841 928 2578 \n",
"Q 928 2378 1081 2264 \n",
"Q 1234 2150 1697 2047 \n",
"L 1894 2003 \n",
"Q 2506 1872 2764 1633 \n",
"Q 3022 1394 3022 966 \n",
"Q 3022 478 2636 193 \n",
"Q 2250 -91 1575 -91 \n",
"Q 1294 -91 989 -36 \n",
"Q 684 19 347 128 \n",
"L 347 722 \n",
"Q 666 556 975 473 \n",
"Q 1284 391 1588 391 \n",
"Q 1994 391 2212 530 \n",
"Q 2431 669 2431 922 \n",
"Q 2431 1156 2273 1281 \n",
"Q 2116 1406 1581 1522 \n",
"L 1381 1569 \n",
"Q 847 1681 609 1914 \n",
"Q 372 2147 372 2553 \n",
"Q 372 3047 722 3315 \n",
"Q 1072 3584 1716 3584 \n",
"Q 2034 3584 2315 3537 \n",
"Q 2597 3491 2834 3397 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" </defs>\n",
" <use xlink:href=\"#DejaVuSans-4b\"/>\n",
" <use xlink:href=\"#DejaVuSans-65\" x=\"60.576172\"/>\n",
" <use xlink:href=\"#DejaVuSans-79\" x=\"122.099609\"/>\n",
" <use xlink:href=\"#DejaVuSans-73\" x=\"181.279297\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"matplotlib.axis_2\">\n",
" <g id=\"ytick_1\">\n",
" <g id=\"line2d_3\">\n",
" <defs>\n",
" <path id=\"m473666877a\" d=\"M 0 0 \n",
"L -3.5 0 \n",
"\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </defs>\n",
" <g>\n",
" <use xlink:href=\"#m473666877a\" x=\"34.240625\" y=\"12.78\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_4\">\n",
" <!-- 0 -->\n",
" <g transform=\"translate(20.878125 16.579219)scale(0.1 -0.1)\">\n",
" <use xlink:href=\"#DejaVuSans-30\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"ytick_2\">\n",
" <g id=\"line2d_4\">\n",
" <g>\n",
" <use xlink:href=\"#m473666877a\" x=\"34.240625\" y=\"35.1\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_5\">\n",
" <!-- 2 -->\n",
" <g transform=\"translate(20.878125 38.899219)scale(0.1 -0.1)\">\n",
" <defs>\n",
" <path id=\"DejaVuSans-32\" d=\"M 1228 531 \n",
"L 3431 531 \n",
"L 3431 0 \n",
"L 469 0 \n",
"L 469 531 \n",
"Q 828 903 1448 1529 \n",
"Q 2069 2156 2228 2338 \n",
"Q 2531 2678 2651 2914 \n",
"Q 2772 3150 2772 3378 \n",
"Q 2772 3750 2511 3984 \n",
"Q 2250 4219 1831 4219 \n",
"Q 1534 4219 1204 4116 \n",
"Q 875 4013 500 3803 \n",
"L 500 4441 \n",
"Q 881 4594 1212 4672 \n",
"Q 1544 4750 1819 4750 \n",
"Q 2544 4750 2975 4387 \n",
"Q 3406 4025 3406 3419 \n",
"Q 3406 3131 3298 2873 \n",
"Q 3191 2616 2906 2266 \n",
"Q 2828 2175 2409 1742 \n",
"Q 1991 1309 1228 531 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" </defs>\n",
" <use xlink:href=\"#DejaVuSans-32\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"ytick_3\">\n",
" <g id=\"line2d_5\">\n",
" <g>\n",
" <use xlink:href=\"#m473666877a\" x=\"34.240625\" y=\"57.42\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_6\">\n",
" <!-- 4 -->\n",
" <g transform=\"translate(20.878125 61.219219)scale(0.1 -0.1)\">\n",
" <defs>\n",
" <path id=\"DejaVuSans-34\" d=\"M 2419 4116 \n",
"L 825 1625 \n",
"L 2419 1625 \n",
"L 2419 4116 \n",
"z\n",
"M 2253 4666 \n",
"L 3047 4666 \n",
"L 3047 1625 \n",
"L 3713 1625 \n",
"L 3713 1100 \n",
"L 3047 1100 \n",
"L 3047 0 \n",
"L 2419 0 \n",
"L 2419 1100 \n",
"L 313 1100 \n",
"L 313 1709 \n",
"L 2253 4666 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" </defs>\n",
" <use xlink:href=\"#DejaVuSans-34\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"ytick_4\">\n",
" <g id=\"line2d_6\">\n",
" <g>\n",
" <use xlink:href=\"#m473666877a\" x=\"34.240625\" y=\"79.74\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_7\">\n",
" <!-- 6 -->\n",
" <g transform=\"translate(20.878125 83.539219)scale(0.1 -0.1)\">\n",
" <defs>\n",
" <path id=\"DejaVuSans-36\" d=\"M 2113 2584 \n",
"Q 1688 2584 1439 2293 \n",
"Q 1191 2003 1191 1497 \n",
"Q 1191 994 1439 701 \n",
"Q 1688 409 2113 409 \n",
"Q 2538 409 2786 701 \n",
"Q 3034 994 3034 1497 \n",
"Q 3034 2003 2786 2293 \n",
"Q 2538 2584 2113 2584 \n",
"z\n",
"M 3366 4563 \n",
"L 3366 3988 \n",
"Q 3128 4100 2886 4159 \n",
"Q 2644 4219 2406 4219 \n",
"Q 1781 4219 1451 3797 \n",
"Q 1122 3375 1075 2522 \n",
"Q 1259 2794 1537 2939 \n",
"Q 1816 3084 2150 3084 \n",
"Q 2853 3084 3261 2657 \n",
"Q 3669 2231 3669 1497 \n",
"Q 3669 778 3244 343 \n",
"Q 2819 -91 2113 -91 \n",
"Q 1303 -91 875 529 \n",
"Q 447 1150 447 2328 \n",
"Q 447 3434 972 4092 \n",
"Q 1497 4750 2381 4750 \n",
"Q 2619 4750 2861 4703 \n",
"Q 3103 4656 3366 4563 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" </defs>\n",
" <use xlink:href=\"#DejaVuSans-36\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"ytick_5\">\n",
" <g id=\"line2d_7\">\n",
" <g>\n",
" <use xlink:href=\"#m473666877a\" x=\"34.240625\" y=\"102.06\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_8\">\n",
" <!-- 8 -->\n",
" <g transform=\"translate(20.878125 105.859219)scale(0.1 -0.1)\">\n",
" <defs>\n",
" <path id=\"DejaVuSans-38\" d=\"M 2034 2216 \n",
"Q 1584 2216 1326 1975 \n",
"Q 1069 1734 1069 1313 \n",
"Q 1069 891 1326 650 \n",
"Q 1584 409 2034 409 \n",
"Q 2484 409 2743 651 \n",
"Q 3003 894 3003 1313 \n",
"Q 3003 1734 2745 1975 \n",
"Q 2488 2216 2034 2216 \n",
"z\n",
"M 1403 2484 \n",
"Q 997 2584 770 2862 \n",
"Q 544 3141 544 3541 \n",
"Q 544 4100 942 4425 \n",
"Q 1341 4750 2034 4750 \n",
"Q 2731 4750 3128 4425 \n",
"Q 3525 4100 3525 3541 \n",
"Q 3525 3141 3298 2862 \n",
"Q 3072 2584 2669 2484 \n",
"Q 3125 2378 3379 2068 \n",
"Q 3634 1759 3634 1313 \n",
"Q 3634 634 3220 271 \n",
"Q 2806 -91 2034 -91 \n",
"Q 1263 -91 848 271 \n",
"Q 434 634 434 1313 \n",
"Q 434 1759 690 2068 \n",
"Q 947 2378 1403 2484 \n",
"z\n",
"M 1172 3481 \n",
"Q 1172 3119 1398 2916 \n",
"Q 1625 2713 2034 2713 \n",
"Q 2441 2713 2670 2916 \n",
"Q 2900 3119 2900 3481 \n",
"Q 2900 3844 2670 4047 \n",
"Q 2441 4250 2034 4250 \n",
"Q 1625 4250 1398 4047 \n",
"Q 1172 3844 1172 3481 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" </defs>\n",
" <use xlink:href=\"#DejaVuSans-38\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_9\">\n",
" <!-- Queries -->\n",
" <g transform=\"translate(14.798437 82.307031)rotate(-90)scale(0.1 -0.1)\">\n",
" <defs>\n",
" <path id=\"DejaVuSans-51\" d=\"M 2522 4238 \n",
"Q 1834 4238 1429 3725 \n",
"Q 1025 3213 1025 2328 \n",
"Q 1025 1447 1429 934 \n",
"Q 1834 422 2522 422 \n",
"Q 3209 422 3611 934 \n",
"Q 4013 1447 4013 2328 \n",
"Q 4013 3213 3611 3725 \n",
"Q 3209 4238 2522 4238 \n",
"z\n",
"M 3406 84 \n",
"L 4238 -825 \n",
"L 3475 -825 \n",
"L 2784 -78 \n",
"Q 2681 -84 2626 -87 \n",
"Q 2572 -91 2522 -91 \n",
"Q 1538 -91 948 567 \n",
"Q 359 1225 359 2328 \n",
"Q 359 3434 948 4092 \n",
"Q 1538 4750 2522 4750 \n",
"Q 3503 4750 4090 4092 \n",
"Q 4678 3434 4678 2328 \n",
"Q 4678 1516 4351 937 \n",
"Q 4025 359 3406 84 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" <path id=\"DejaVuSans-75\" d=\"M 544 1381 \n",
"L 544 3500 \n",
"L 1119 3500 \n",
"L 1119 1403 \n",
"Q 1119 906 1312 657 \n",
"Q 1506 409 1894 409 \n",
"Q 2359 409 2629 706 \n",
"Q 2900 1003 2900 1516 \n",
"L 2900 3500 \n",
"L 3475 3500 \n",
"L 3475 0 \n",
"L 2900 0 \n",
"L 2900 538 \n",
"Q 2691 219 2414 64 \n",
"Q 2138 -91 1772 -91 \n",
"Q 1169 -91 856 284 \n",
"Q 544 659 544 1381 \n",
"z\n",
"M 1991 3584 \n",
"L 1991 3584 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" <path id=\"DejaVuSans-72\" d=\"M 2631 2963 \n",
"Q 2534 3019 2420 3045 \n",
"Q 2306 3072 2169 3072 \n",
"Q 1681 3072 1420 2755 \n",
"Q 1159 2438 1159 1844 \n",
"L 1159 0 \n",
"L 581 0 \n",
"L 581 3500 \n",
"L 1159 3500 \n",
"L 1159 2956 \n",
"Q 1341 3275 1631 3429 \n",
"Q 1922 3584 2338 3584 \n",
"Q 2397 3584 2469 3576 \n",
"Q 2541 3569 2628 3553 \n",
"L 2631 2963 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" <path id=\"DejaVuSans-69\" d=\"M 603 3500 \n",
"L 1178 3500 \n",
"L 1178 0 \n",
"L 603 0 \n",
"L 603 3500 \n",
"z\n",
"M 603 4863 \n",
"L 1178 4863 \n",
"L 1178 4134 \n",
"L 603 4134 \n",
"L 603 4863 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" </defs>\n",
" <use xlink:href=\"#DejaVuSans-51\"/>\n",
" <use xlink:href=\"#DejaVuSans-75\" x=\"78.710938\"/>\n",
" <use xlink:href=\"#DejaVuSans-65\" x=\"142.089844\"/>\n",
" <use xlink:href=\"#DejaVuSans-72\" x=\"203.613281\"/>\n",
" <use xlink:href=\"#DejaVuSans-69\" x=\"244.726562\"/>\n",
" <use xlink:href=\"#DejaVuSans-65\" x=\"272.509766\"/>\n",
" <use xlink:href=\"#DejaVuSans-73\" x=\"334.033203\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"patch_3\">\n",
" <path d=\"M 34.240625 118.8 \n",
"L 34.240625 7.2 \n",
"\" style=\"fill: none; stroke: #000000; stroke-width: 0.8; stroke-linejoin: miter; stroke-linecap: square\"/>\n",
" </g>\n",
" <g id=\"patch_4\">\n",
" <path d=\"M 145.840625 118.8 \n",
"L 145.840625 7.2 \n",
"\" style=\"fill: none; stroke: #000000; stroke-width: 0.8; stroke-linejoin: miter; stroke-linecap: square\"/>\n",
" </g>\n",
" <g id=\"patch_5\">\n",
" <path d=\"M 34.240625 118.8 \n",
"L 145.840625 118.8 \n",
"\" style=\"fill: none; stroke: #000000; stroke-width: 0.8; stroke-linejoin: miter; stroke-linecap: square\"/>\n",
" </g>\n",
" <g id=\"patch_6\">\n",
" <path d=\"M 34.240625 7.2 \n",
"L 145.840625 7.2 \n",
"\" style=\"fill: none; stroke: #000000; stroke-width: 0.8; stroke-linejoin: miter; stroke-linecap: square\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"axes_2\">\n",
" <g id=\"patch_7\">\n",
" <path d=\"M 152.815625 103.77 \n",
"L 156.892625 103.77 \n",
"L 156.892625 22.23 \n",
"L 152.815625 22.23 \n",
"z\n",
"\" style=\"fill: #ffffff\"/>\n",
" </g>\n",
" <g id=\"patch_8\">\n",
" <path clip-path=\"url(#p81ac4143b7)\" style=\"fill: #ffffff; stroke: #ffffff; stroke-width: 0.01; stroke-linejoin: miter\"/>\n",
" </g>\n",
" <image xlink:href=\"data:image/png;base64,\n",
"iVBORw0KGgoAAAANSUhEUgAAAAQAAABRCAYAAAD1sgc6AAAAnklEQVR4nJ2Suw7CQAwEjZT//1QqCnR+0XKzJxmSLqPx7krJo1/Ptq/nsgzbQRVBDqBH4wCkhTtkWDhqR8OSIHjii6H/Z2jtaLD2EDpm3DCkNvldftgB0GJIqM/TBfAfE+AAJcaSpZJRg1Fs0QyCjN5BRA0gkycEmTyRFhox7simsb/blSbGDWCDcdgB4MxwGu8CWAJ4shgqJ9IiBms/jwXJt9gA8G4AAAAASUVORK5CYII=\" id=\"image52587ed27e\" transform=\"scale(1 -1)translate(0 -81)\" x=\"153\" y=\"-22\" width=\"4\" height=\"81\"/>\n",
" <g id=\"matplotlib.axis_3\">\n",
" <g id=\"ytick_6\">\n",
" <g id=\"line2d_8\">\n",
" <defs>\n",
" <path id=\"m4194a474b4\" d=\"M 0 0 \n",
"L 3.5 0 \n",
"\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </defs>\n",
" <g>\n",
" <use xlink:href=\"#m4194a474b4\" x=\"156.892625\" y=\"103.77\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_10\">\n",
" <!-- 0.00 -->\n",
" <g transform=\"translate(163.892625 107.569219)scale(0.1 -0.1)\">\n",
" <defs>\n",
" <path id=\"DejaVuSans-2e\" d=\"M 684 794 \n",
"L 1344 794 \n",
"L 1344 0 \n",
"L 684 0 \n",
"L 684 794 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" </defs>\n",
" <use xlink:href=\"#DejaVuSans-30\"/>\n",
" <use xlink:href=\"#DejaVuSans-2e\" x=\"63.623047\"/>\n",
" <use xlink:href=\"#DejaVuSans-30\" x=\"95.410156\"/>\n",
" <use xlink:href=\"#DejaVuSans-30\" x=\"159.033203\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"ytick_7\">\n",
" <g id=\"line2d_9\">\n",
" <g>\n",
" <use xlink:href=\"#m4194a474b4\" x=\"156.892625\" y=\"83.385\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_11\">\n",
" <!-- 0.25 -->\n",
" <g transform=\"translate(163.892625 87.184219)scale(0.1 -0.1)\">\n",
" <use xlink:href=\"#DejaVuSans-30\"/>\n",
" <use xlink:href=\"#DejaVuSans-2e\" x=\"63.623047\"/>\n",
" <use xlink:href=\"#DejaVuSans-32\" x=\"95.410156\"/>\n",
" <use xlink:href=\"#DejaVuSans-35\" x=\"159.033203\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"ytick_8\">\n",
" <g id=\"line2d_10\">\n",
" <g>\n",
" <use xlink:href=\"#m4194a474b4\" x=\"156.892625\" y=\"63\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_12\">\n",
" <!-- 0.50 -->\n",
" <g transform=\"translate(163.892625 66.799219)scale(0.1 -0.1)\">\n",
" <use xlink:href=\"#DejaVuSans-30\"/>\n",
" <use xlink:href=\"#DejaVuSans-2e\" x=\"63.623047\"/>\n",
" <use xlink:href=\"#DejaVuSans-35\" x=\"95.410156\"/>\n",
" <use xlink:href=\"#DejaVuSans-30\" x=\"159.033203\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"ytick_9\">\n",
" <g id=\"line2d_11\">\n",
" <g>\n",
" <use xlink:href=\"#m4194a474b4\" x=\"156.892625\" y=\"42.615\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_13\">\n",
" <!-- 0.75 -->\n",
" <g transform=\"translate(163.892625 46.414219)scale(0.1 -0.1)\">\n",
" <defs>\n",
" <path id=\"DejaVuSans-37\" d=\"M 525 4666 \n",
"L 3525 4666 \n",
"L 3525 4397 \n",
"L 1831 0 \n",
"L 1172 0 \n",
"L 2766 4134 \n",
"L 525 4134 \n",
"L 525 4666 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" </defs>\n",
" <use xlink:href=\"#DejaVuSans-30\"/>\n",
" <use xlink:href=\"#DejaVuSans-2e\" x=\"63.623047\"/>\n",
" <use xlink:href=\"#DejaVuSans-37\" x=\"95.410156\"/>\n",
" <use xlink:href=\"#DejaVuSans-35\" x=\"159.033203\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"ytick_10\">\n",
" <g id=\"line2d_12\">\n",
" <g>\n",
" <use xlink:href=\"#m4194a474b4\" x=\"156.892625\" y=\"22.23\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_14\">\n",
" <!-- 1.00 -->\n",
" <g transform=\"translate(163.892625 26.029219)scale(0.1 -0.1)\">\n",
" <defs>\n",
" <path id=\"DejaVuSans-31\" d=\"M 794 531 \n",
"L 1825 531 \n",
"L 1825 4091 \n",
"L 703 3866 \n",
"L 703 4441 \n",
"L 1819 4666 \n",
"L 2450 4666 \n",
"L 2450 531 \n",
"L 3481 531 \n",
"L 3481 0 \n",
"L 794 0 \n",
"L 794 531 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" </defs>\n",
" <use xlink:href=\"#DejaVuSans-31\"/>\n",
" <use xlink:href=\"#DejaVuSans-2e\" x=\"63.623047\"/>\n",
" <use xlink:href=\"#DejaVuSans-30\" x=\"95.410156\"/>\n",
" <use xlink:href=\"#DejaVuSans-30\" x=\"159.033203\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"LineCollection_1\"/>\n",
" <g id=\"patch_9\">\n",
" <path d=\"M 152.815625 103.77 \n",
"L 154.854125 103.77 \n",
"L 156.892625 103.77 \n",
"L 156.892625 22.23 \n",
"L 154.854125 22.23 \n",
"L 152.815625 22.23 \n",
"L 152.815625 103.77 \n",
"z\n",
"\" style=\"fill: none; stroke: #000000; stroke-width: 0.8; stroke-linejoin: miter; stroke-linecap: square\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <defs>\n",
" <clipPath id=\"p3a36c213e6\">\n",
" <rect x=\"34.240625\" y=\"7.2\" width=\"111.6\" height=\"111.6\"/>\n",
" </clipPath>\n",
" <clipPath id=\"p81ac4143b7\">\n",
" <rect x=\"152.815625\" y=\"22.23\" width=\"4.077\" height=\"81.54\"/>\n",
" </clipPath>\n",
" </defs>\n",
"</svg>\n"
],
"text/plain": [
"<Figure size 180x180 with 2 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"attention_weights = torch.eye(10).reshape((1, 1, 10, 10))\n",
"show_heatmaps(attention_weights, xlabel='Keys', ylabel='Queries')"
]
},
{
"cell_type": "markdown",
"id": "0f6c23cb",
"metadata": {
"origin_pos": 9
},
"source": [
"后面的章节内容将经常调用`show_heatmaps`函数来显示注意力权重。\n",
"\n",
"## 小结\n",
"\n",
"* 人类的注意力是有限的、有价值和稀缺的资源。\n",
"* 受试者使用非自主性和自主性提示有选择性地引导注意力。前者基于突出性,后者则依赖于意识。\n",
    "* 注意力机制与全连接层或汇聚层的区别在于增加了自主性提示。\n",
"* 注意力机制通过注意力汇聚使选择偏向于值(感官输入),其中包含查询(自主性提示)和键(非自主性提示)。键和值是成对的。\n",
"* 可视化查询和键之间的注意力权重是可行的。\n",
"\n",
"## 练习\n",
"\n",
    "1. 在机器翻译中逐个解码序列词元时,其自主性提示可能是什么?非自主性提示和感官输入又是什么?\n",
"1. 随机生成一个$10 \\times 10$矩阵并使用`softmax`运算来确保每行都是有效的概率分布,然后可视化输出注意力权重。\n"
]
},
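  {
   "cell_type": "markdown",
   "id": "b7c8d9e0",
   "metadata": {},
   "source": [
    "关于练习2的思路,可以先用纯Python验证“逐行softmax后每行都是有效的概率分布”(随机种子与矩阵大小均为任意选择):\n",
    "\n",
    "```python\n",
    "import math\n",
    "import random\n",
    "\n",
    "random.seed(0)\n",
    "mat = [[random.gauss(0, 1) for _ in range(10)] for _ in range(10)]\n",
    "\n",
    "def softmax(row):\n",
    "    m = max(row)                      # 减去最大值以保证数值稳定\n",
    "    exps = [math.exp(x - m) for x in row]\n",
    "    s = sum(exps)\n",
    "    return [e / s for e in exps]\n",
    "\n",
    "probs = [softmax(row) for row in mat]\n",
    "print(all(abs(sum(row) - 1) < 1e-9 for row in probs))\n",
    "```\n",
    "\n",
    "之后即可把`probs`转为形状为(1, 1, 10, 10)的张量,交给`show_heatmaps`可视化。\n"
   ]
  },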
{
"cell_type": "markdown",
"id": "675bab48",
"metadata": {
"origin_pos": 11,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/5764)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
File diff suppressed because it is too large
@@ -0,0 +1,58 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e2119702",
"metadata": {
"origin_pos": 0
},
"source": [
"# 注意力机制\n",
":label:`chap_attention`\n",
"\n",
"灵长类动物的视觉系统接受了大量的感官输入,\n",
"这些感官输入远远超过了大脑能够完全处理的程度。\n",
"然而,并非所有刺激的影响都是相等的。\n",
"意识的聚集和专注使灵长类动物能够在复杂的视觉环境中将注意力引向感兴趣的物体,例如猎物和天敌。\n",
"只关注一小部分信息的能力对进化更加有意义,使人类得以生存和成功。\n",
"\n",
"自19世纪以来,科学家们一直致力于研究认知神经科学领域的注意力。\n",
"本章的很多章节将涉及到一些研究。\n",
"\n",
"首先回顾一个经典注意力框架,解释如何在视觉场景中展开注意力。\n",
    "受此框架中的*注意力提示*(attention cues)的启发,\n",
"我们将设计能够利用这些注意力提示的模型。\n",
    "1964年提出的Nadaraya-Watson核回归(kernel regression)正是具有\n",
    "*注意力机制*(attention mechanism)的机器学习的简单演示。\n",
"\n",
"然后继续介绍的是注意力函数,它们在深度学习的注意力模型设计中被广泛使用。\n",
"具体来说,我们将展示如何使用这些函数来设计*Bahdanau注意力*。\n",
"Bahdanau注意力是深度学习中的具有突破性价值的注意力模型,它双向对齐并且可以微分。\n",
"\n",
"最后将描述仅仅基于注意力机制的*Transformer*架构,\n",
    "该架构中使用了*多头注意力*(multi-head attention)\n",
    "和*自注意力*(self-attention)。\n",
    "自2017年横空出世以来,Transformer一直普遍存在于现代深度学习应用中,\n",
"例如语言、视觉、语音和强化学习领域。\n",
"\n",
":begin_tab:toc\n",
" - [attention-cues](attention-cues.ipynb)\n",
" - [nadaraya-waston](nadaraya-waston.ipynb)\n",
" - [attention-scoring-functions](attention-scoring-functions.ipynb)\n",
" - [bahdanau-attention](bahdanau-attention.ipynb)\n",
" - [multihead-attention](multihead-attention.ipynb)\n",
" - [self-attention-and-positional-encoding](self-attention-and-positional-encoding.ipynb)\n",
" - [transformer](transformer.ipynb)\n",
":end_tab:\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,349 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "36341c1f",
"metadata": {
"origin_pos": 0
},
"source": [
"# 多头注意力\n",
":label:`sec_multihead-attention`\n",
"\n",
"在实践中,当给定相同的查询、键和值的集合时,\n",
"我们希望模型可以基于相同的注意力机制学习到不同的行为,\n",
"然后将不同的行为作为知识组合起来,\n",
"捕获序列内各种范围的依赖关系\n",
"(例如,短距离依赖和长距离依赖关系)。\n",
"因此,允许注意力机制组合使用查询、键和值的不同\n",
"*子空间表示*(representation subspaces)可能是有益的。\n",
"\n",
"为此,与其只使用单独一个注意力汇聚,\n",
"我们可以用独立学习得到的$h$组不同的\n",
"*线性投影*(linear projections)来变换查询、键和值。\n",
"然后,这$h$组变换后的查询、键和值将并行地送到注意力汇聚中。\n",
"最后,将这$h$个注意力汇聚的输出拼接在一起,\n",
"并且通过另一个可以学习的线性投影进行变换,\n",
"以产生最终输出。\n",
"这种设计被称为*多头注意力*(multihead attention)\n",
" :cite:`Vaswani.Shazeer.Parmar.ea.2017`。\n",
"对于$h$个注意力汇聚输出,每一个注意力汇聚都被称作一个*头*(head)。\n",
" :numref:`fig_multi-head-attention`\n",
"展示了使用全连接层来实现可学习的线性变换的多头注意力。\n",
"\n",
"![多头注意力:多个头连结然后线性变换](../img/multi-head-attention.svg)\n",
":label:`fig_multi-head-attention`\n",
"\n",
"## 模型\n",
"\n",
"在实现多头注意力之前,让我们用数学语言将这个模型形式化地描述出来。\n",
"给定查询$\\mathbf{q} \\in \\mathbb{R}^{d_q}$、\n",
"键$\\mathbf{k} \\in \\mathbb{R}^{d_k}$和\n",
"值$\\mathbf{v} \\in \\mathbb{R}^{d_v}$\n",
"每个注意力头$\\mathbf{h}_i$$i = 1, \\ldots, h$)的计算方法为:\n",
"\n",
"$$\\mathbf{h}_i = f(\\mathbf W_i^{(q)}\\mathbf q, \\mathbf W_i^{(k)}\\mathbf k,\\mathbf W_i^{(v)}\\mathbf v) \\in \\mathbb R^{p_v},$$\n",
"\n",
"其中,可学习的参数包括\n",
"$\\mathbf W_i^{(q)}\\in\\mathbb R^{p_q\\times d_q}$、\n",
"$\\mathbf W_i^{(k)}\\in\\mathbb R^{p_k\\times d_k}$和\n",
"$\\mathbf W_i^{(v)}\\in\\mathbb R^{p_v\\times d_v}$\n",
"以及代表注意力汇聚的函数$f$。\n",
"$f$可以是 :numref:`sec_attention-scoring-functions`中的\n",
"加性注意力和缩放点积注意力。\n",
"多头注意力的输出需要经过另一个线性转换,\n",
"它对应着$h$个头连结后的结果,因此其可学习参数是\n",
"$\\mathbf W_o\\in\\mathbb R^{p_o\\times h p_v}$\n",
"\n",
"$$\\mathbf W_o \\begin{bmatrix}\\mathbf h_1\\\\\\vdots\\\\\\mathbf h_h\\end{bmatrix} \\in \\mathbb{R}^{p_o}.$$\n",
"\n",
"基于这种设计,每个头都可能会关注输入的不同部分,\n",
"可以表示比简单加权平均值更复杂的函数。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "dc55ba33",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:32.189972Z",
"iopub.status.busy": "2023-08-18T07:01:32.189240Z",
"iopub.status.idle": "2023-08-18T07:01:34.516491Z",
"shell.execute_reply": "2023-08-18T07:01:34.515475Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import math\n",
"import torch\n",
"from torch import nn\n",
"from d2l import torch as d2l"
]
},
{
"cell_type": "markdown",
"id": "b51ca181",
"metadata": {
"origin_pos": 5
},
"source": [
"## 实现\n",
"\n",
"在实现过程中通常[**选择缩放点积注意力作为每一个注意力头**]。\n",
"为了避免计算代价和参数代价的大幅增长,\n",
"我们设定$p_q = p_k = p_v = p_o / h$。\n",
"值得注意的是,如果将查询、键和值的线性变换的输出数量设置为\n",
"$p_q h = p_k h = p_v h = p_o$\n",
"则可以并行计算$h$个头。\n",
"在下面的实现中,$p_o$是通过参数`num_hiddens`指定的。\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "1bb10990",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:34.521491Z",
"iopub.status.busy": "2023-08-18T07:01:34.521131Z",
"iopub.status.idle": "2023-08-18T07:01:34.530492Z",
"shell.execute_reply": "2023-08-18T07:01:34.529556Z"
},
"origin_pos": 7,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"class MultiHeadAttention(nn.Module):\n",
" \"\"\"多头注意力\"\"\"\n",
" def __init__(self, key_size, query_size, value_size, num_hiddens,\n",
" num_heads, dropout, bias=False, **kwargs):\n",
" super(MultiHeadAttention, self).__init__(**kwargs)\n",
" self.num_heads = num_heads\n",
" self.attention = d2l.DotProductAttention(dropout)\n",
" self.W_q = nn.Linear(query_size, num_hiddens, bias=bias)\n",
" self.W_k = nn.Linear(key_size, num_hiddens, bias=bias)\n",
" self.W_v = nn.Linear(value_size, num_hiddens, bias=bias)\n",
" self.W_o = nn.Linear(num_hiddens, num_hiddens, bias=bias)\n",
"\n",
" def forward(self, queries, keys, values, valid_lens):\n",
" # querieskeysvalues的形状:\n",
" # (batch_size,查询或者“键-值”对的个数,num_hiddens)\n",
" # valid_lens 的形状:\n",
" # (batch_size)或(batch_size,查询的个数)\n",
" # 经过变换后,输出的querieskeysvalues 的形状:\n",
" # (batch_size*num_heads,查询或者“键-值”对的个数,\n",
" # num_hiddens/num_heads)\n",
" queries = transpose_qkv(self.W_q(queries), self.num_heads)\n",
" keys = transpose_qkv(self.W_k(keys), self.num_heads)\n",
" values = transpose_qkv(self.W_v(values), self.num_heads)\n",
"\n",
" if valid_lens is not None:\n",
" # 在轴0,将第一项(标量或者矢量)复制num_heads次,\n",
" # 然后如此复制第二项,然后诸如此类。\n",
" valid_lens = torch.repeat_interleave(\n",
" valid_lens, repeats=self.num_heads, dim=0)\n",
"\n",
" # output的形状:(batch_size*num_heads,查询的个数,\n",
" # num_hiddens/num_heads)\n",
" output = self.attention(queries, keys, values, valid_lens)\n",
"\n",
" # output_concat的形状:(batch_size,查询的个数,num_hiddens)\n",
" output_concat = transpose_output(output, self.num_heads)\n",
" return self.W_o(output_concat)"
]
},
{
"cell_type": "markdown",
"id": "9ab1c33b",
"metadata": {
"origin_pos": 10
},
"source": [
"为了能够[**使多个头并行计算**]\n",
"上面的`MultiHeadAttention`类将使用下面定义的两个转置函数。\n",
"具体来说,`transpose_output`函数反转了`transpose_qkv`函数的操作。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "b2af5ed8",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:34.534820Z",
"iopub.status.busy": "2023-08-18T07:01:34.534308Z",
"iopub.status.idle": "2023-08-18T07:01:34.540852Z",
"shell.execute_reply": "2023-08-18T07:01:34.539927Z"
},
"origin_pos": 12,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"def transpose_qkv(X, num_heads):\n",
" \"\"\"为了多注意力头的并行计算而变换形状\"\"\"\n",
" # 输入X的形状:(batch_size,查询或者“键-值”对的个数,num_hiddens)\n",
" # 输出X的形状:(batch_size,查询或者“键-值”对的个数,num_heads\n",
" # num_hiddens/num_heads)\n",
" X = X.reshape(X.shape[0], X.shape[1], num_heads, -1)\n",
"\n",
" # 输出X的形状:(batch_sizenum_heads,查询或者“键-值”对的个数,\n",
" # num_hiddens/num_heads)\n",
" X = X.permute(0, 2, 1, 3)\n",
"\n",
" # 最终输出的形状:(batch_size*num_heads,查询或者“键-值”对的个数,\n",
" # num_hiddens/num_heads)\n",
" return X.reshape(-1, X.shape[2], X.shape[3])\n",
"\n",
"\n",
"#@save\n",
"def transpose_output(X, num_heads):\n",
" \"\"\"逆转transpose_qkv函数的操作\"\"\"\n",
" X = X.reshape(-1, num_heads, X.shape[1], X.shape[2])\n",
" X = X.permute(0, 2, 1, 3)\n",
" return X.reshape(X.shape[0], X.shape[1], -1)"
]
},
{
"cell_type": "markdown",
"id": "0e31b376",
"metadata": {
"origin_pos": 15
},
"source": [
"下面使用键和值相同的小例子来[**测试**]我们编写的`MultiHeadAttention`类。\n",
"多头注意力输出的形状是(`batch_size``num_queries``num_hiddens`)。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d06baadf",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:34.545405Z",
"iopub.status.busy": "2023-08-18T07:01:34.544605Z",
"iopub.status.idle": "2023-08-18T07:01:34.571251Z",
"shell.execute_reply": "2023-08-18T07:01:34.570476Z"
},
"origin_pos": 17,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"MultiHeadAttention(\n",
" (attention): DotProductAttention(\n",
" (dropout): Dropout(p=0.5, inplace=False)\n",
" )\n",
" (W_q): Linear(in_features=100, out_features=100, bias=False)\n",
" (W_k): Linear(in_features=100, out_features=100, bias=False)\n",
" (W_v): Linear(in_features=100, out_features=100, bias=False)\n",
" (W_o): Linear(in_features=100, out_features=100, bias=False)\n",
")"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"num_hiddens, num_heads = 100, 5\n",
"attention = MultiHeadAttention(num_hiddens, num_hiddens, num_hiddens,\n",
" num_hiddens, num_heads, 0.5)\n",
"attention.eval()"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "8da65afc",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:34.574642Z",
"iopub.status.busy": "2023-08-18T07:01:34.574021Z",
"iopub.status.idle": "2023-08-18T07:01:34.588848Z",
"shell.execute_reply": "2023-08-18T07:01:34.587945Z"
},
"origin_pos": 20,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([2, 4, 100])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"batch_size, num_queries = 2, 4\n",
"num_kvpairs, valid_lens = 6, torch.tensor([3, 2])\n",
"X = torch.ones((batch_size, num_queries, num_hiddens))\n",
"Y = torch.ones((batch_size, num_kvpairs, num_hiddens))\n",
"attention(X, Y, Y, valid_lens).shape"
]
},
{
"cell_type": "markdown",
"id": "c228d916",
"metadata": {
"origin_pos": 22
},
"source": [
"## 小结\n",
"\n",
"* 多头注意力融合了来自于多个注意力汇聚的不同知识,这些知识的不同来源于相同的查询、键和值的不同的子空间表示。\n",
"* 基于适当的张量操作,可以实现多头注意力的并行计算。\n",
"\n",
"## 练习\n",
"\n",
"1. 分别可视化这个实验中的多个头的注意力权重。\n",
"1. 假设有一个完成训练的基于多头注意力的模型,现在希望修剪最不重要的注意力头以提高预测速度。如何设计实验来衡量注意力头的重要性呢?\n"
]
},
{
"cell_type": "markdown",
"id": "bfae5c77",
"metadata": {
"origin_pos": 24,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/5758)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
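下面用一个最小的示例(只依赖`torch`,张量形状为随意选择的假设值)来检验上文`transpose_qkv`与`transpose_output`互为逆操作,以及按头拆分后形状的变化:

```python
import torch

def transpose_qkv(X, num_heads):
    # (batch_size, 个数, num_hiddens) -> (batch_size*num_heads, 个数, num_hiddens/num_heads)
    X = X.reshape(X.shape[0], X.shape[1], num_heads, -1)
    X = X.permute(0, 2, 1, 3)
    return X.reshape(-1, X.shape[2], X.shape[3])

def transpose_output(X, num_heads):
    # 逆转transpose_qkv的操作
    X = X.reshape(-1, num_heads, X.shape[1], X.shape[2])
    X = X.permute(0, 2, 1, 3)
    return X.reshape(X.shape[0], X.shape[1], -1)

# 假设batch_size=2、查询个数=4、num_hiddens=12、num_heads=3
X = torch.arange(2 * 4 * 12, dtype=torch.float32).reshape(2, 4, 12)
H = transpose_qkv(X, num_heads=3)
print(H.shape)  # torch.Size([6, 4, 4])
print(torch.equal(transpose_output(H, num_heads=3), X))  # True
```

可以看到,拆分后每个头得到`num_hiddens/num_heads`维的表示,而两次变换恰好还原出原始张量。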
{
"cells": [
{
"cell_type": "markdown",
"id": "77479af3",
"metadata": {
"origin_pos": 0
},
"source": [
"# 异步计算\n",
":label:`sec_async`\n",
"\n",
"今天的计算机是高度并行的系统,由多个CPU核、多个GPU、多个处理单元组成。通常每个CPU核有多个线程,每个设备通常有多个GPU,每个GPU有多个处理单元。总之,我们可以同时处理许多不同的事情,并且通常是在不同的设备上。不幸的是,Python并不善于编写并行和异步代码,至少在没有额外帮助的情况下不是好选择。归根结底,Python是单线程的,将来也是不太可能改变的。因此在诸多的深度学习框架中,MXNet和TensorFlow之类则采用了一种*异步编程*(asynchronous programming)模型来提高性能,而PyTorch则使用了Python自己的调度器来实现不同的性能权衡。对PyTorch来说,GPU操作在默认情况下是异步的。当调用一个使用GPU的函数时,操作会排队到特定的设备上,但不一定会立即执行。这允许我们并行执行更多的计算,包括在CPU或其他GPU上的操作。\n",
"\n",
"因此,了解异步编程是如何工作的,通过主动地减少计算需求和相互依赖,有助于我们开发更高效的程序。这能够减少内存开销并提高处理器利用率。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "66ebecda",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:29:23.676819Z",
"iopub.status.busy": "2023-08-18T07:29:23.676275Z",
"iopub.status.idle": "2023-08-18T07:29:26.719058Z",
"shell.execute_reply": "2023-08-18T07:29:26.717749Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import os\n",
"import subprocess\n",
"import numpy\n",
"import torch\n",
"from torch import nn\n",
"from d2l import torch as d2l"
]
},
{
"cell_type": "markdown",
"id": "86ca52da",
"metadata": {
"origin_pos": 4
},
"source": [
"## 通过后端异步处理\n"
]
},
{
"cell_type": "markdown",
"id": "0fbdcd2b",
"metadata": {
"origin_pos": 6,
"tab": [
"pytorch"
]
},
"source": [
"作为热身,考虑一个简单问题:生成一个随机矩阵并将其相乘。让我们在NumPy和PyTorch张量中都这样做,看看它们的区别。请注意,PyTorch的`tensor`是在GPU上定义的。\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "e4c20b11",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:29:26.723694Z",
"iopub.status.busy": "2023-08-18T07:29:26.723007Z",
"iopub.status.idle": "2023-08-18T07:29:29.882717Z",
"shell.execute_reply": "2023-08-18T07:29:29.881143Z"
},
"origin_pos": 9,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"numpy: 1.0704 sec\n",
"torch: 0.0013 sec\n"
]
}
],
"source": [
"# GPU计算热身\n",
"device = d2l.try_gpu()\n",
"a = torch.randn(size=(1000, 1000), device=device)\n",
"b = torch.mm(a, a)\n",
"\n",
"with d2l.Benchmark('numpy'):\n",
" for _ in range(10):\n",
" a = numpy.random.normal(size=(1000, 1000))\n",
" b = numpy.dot(a, a)\n",
"\n",
"with d2l.Benchmark('torch'):\n",
" for _ in range(10):\n",
" a = torch.randn(size=(1000, 1000), device=device)\n",
" b = torch.mm(a, a)"
]
},
{
"cell_type": "markdown",
"id": "c8188fde",
"metadata": {
"origin_pos": 12,
"tab": [
"pytorch"
]
},
"source": [
"从基准输出来看,PyTorch快了几个数量级。NumPy点积是在CPU上执行的,而PyTorch矩阵乘法是在GPU上执行的,后者的速度要快得多。但巨大的时间差距表明一定还有其他原因。默认情况下,GPU操作在PyTorch中是异步的。下面强制PyTorch在返回之前完成所有计算,这就能说明之前发生的情况:计算是由后端执行的,而前端只是将控制权返回给了Python。\n",
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "78106858",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:29:29.891458Z",
"iopub.status.busy": "2023-08-18T07:29:29.890289Z",
"iopub.status.idle": "2023-08-18T07:29:29.904366Z",
"shell.execute_reply": "2023-08-18T07:29:29.902435Z"
},
"origin_pos": 15,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Done: 0.0049 sec\n"
]
}
],
"source": [
"with d2l.Benchmark():\n",
" for _ in range(10):\n",
" a = torch.randn(size=(1000, 1000), device=device)\n",
" b = torch.mm(a, a)\n",
" torch.cuda.synchronize(device)"
]
},
{
"cell_type": "markdown",
"id": "eb45905d",
"metadata": {
"origin_pos": 18,
"tab": [
"pytorch"
]
},
"source": [
"广义上说,PyTorch有一个用于与用户直接交互的前端(例如通过Python),还有一个由系统用来执行计算的后端。如 :numref:`fig_frontends`所示,用户可以用各种前端语言编写PyTorch程序,如Python和C++。不管使用的前端编程语言是什么,PyTorch程序的执行主要发生在C++实现的后端。由前端语言发出的操作被传递到后端执行。后端管理自己的线程,这些线程不断收集和执行排队的任务。请注意,要使其工作,后端必须能够跟踪计算图中各个步骤之间的依赖关系。因此,不可能并行化相互依赖的操作。\n"
]
},
{
"cell_type": "markdown",
"id": "751ec224",
"metadata": {
"origin_pos": 20
},
"source": [
"![编程语言前端和深度学习框架后端](../img/frontends.png)\n",
":width:`300px`\n",
":label:`fig_frontends`\n",
"\n",
"接下来看看另一个简单例子,以便更好地理解依赖关系图。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e4b981d5",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:29:29.910704Z",
"iopub.status.busy": "2023-08-18T07:29:29.910033Z",
"iopub.status.idle": "2023-08-18T07:29:29.963733Z",
"shell.execute_reply": "2023-08-18T07:29:29.962149Z"
},
"origin_pos": 22,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[3., 3.]], device='cuda:0')"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"x = torch.ones((1, 2), device=device)\n",
"y = torch.ones((1, 2), device=device)\n",
"z = x * y + 2\n",
"z"
]
},
{
"cell_type": "markdown",
"id": "47f52925",
"metadata": {
"origin_pos": 24
},
"source": [
"![后端跟踪计算图中各个步骤之间的依赖关系](../img/asyncgraph.svg)\n",
":label:`fig_asyncgraph`\n",
"\n",
"上面的代码片段在 :numref:`fig_asyncgraph`中进行了说明。每当Python前端线程执行前三条语句中的一条语句时,它只是将任务返回到后端队列。当最后一个语句的结果需要被打印出来时,Python前端线程将等待C++后端线程完成变量`z`的结果计算。这种设计的一个好处是Python前端线程不需要执行实际的计算。因此,不管Python的性能如何,对程序的整体性能几乎没有影响。 :numref:`fig_threading`演示了前端和后端如何交互。\n",
"\n",
"![前端和后端的交互](../img/threading.svg)\n",
":label:`fig_threading`\n",
"\n",
"## 障碍器与阻塞器\n"
]
},
{
"cell_type": "markdown",
"id": "8c107794",
"metadata": {
"origin_pos": 29
},
"source": [
"## 改进计算\n"
]
},
{
"cell_type": "markdown",
"id": "185ccf3b",
"metadata": {
"origin_pos": 32
},
"source": [
"Python前端线程和C++后端线程之间的简化交互可以概括如下:\n",
"\n",
"1. 前端命令后端将计算任务`y = x + 1`插入队列;\n",
"1. 然后后端从队列接收计算任务并执行;\n",
"1. 然后后端将计算结果返回到前端。\n",
"\n",
"假设这三个阶段的持续时间分别为$t_1, t_2, t_3$。如果不使用异步编程,执行10000次计算所需的总时间约为$10000 (t_1+ t_2 + t_3)$。如果使用异步编程,因为前端不必等待后端为每个循环返回计算结果,执行$10000$次计算所花费的总时间可以减少到$t_1 + 10000 t_2 + t_3$(假设$10000 t_2 > 9999t_1$)。\n",
"\n",
"\n",
"## 小结\n",
"\n",
"* 深度学习框架可以将Python前端的控制与后端的执行解耦,使得命令可以快速地异步插入后端、并行执行。\n",
"* 异步产生了一个相当灵活的前端,但请注意:过度填充任务队列可能会导致内存消耗过多。建议对每个小批量进行同步,以保持前端和后端大致同步。\n",
"* 芯片供应商提供了复杂的性能分析工具,以获得对深度学习效率更精确的洞察。\n"
]
},
{
"cell_type": "markdown",
"id": "8a353540",
"metadata": {
"origin_pos": 34
},
"source": [
"## 练习\n"
]
},
{
"cell_type": "markdown",
"id": "1989c931",
"metadata": {
"origin_pos": 36,
"tab": [
"pytorch"
]
},
"source": [
"1. 在CPU上,对本节中相同的矩阵乘法操作进行基准测试,仍然可以通过后端观察异步吗?\n"
]
},
{
"cell_type": "markdown",
"id": "85daf71a",
"metadata": {
"origin_pos": 39,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/2791)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
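上面描述的前端/后端排队行为可以用一个纯Python的玩具模型来直观演示。注意这只是对排队机制的示意,并非PyTorch后端的真实实现:这里用一个工作线程充当"后端",用`time.sleep`模拟每个任务的计算耗时(数值为假设值)。

```python
import queue
import threading
import time

tasks = queue.Queue()
results = []

def backend():
    # "后端"线程:不断从队列取出任务并执行
    while True:
        fn = tasks.get()
        if fn is None:
            return
        results.append(fn())

worker = threading.Thread(target=backend)
worker.start()

start = time.time()
for i in range(5):
    # "前端"只负责把任务插入队列,立即返回
    tasks.put(lambda i=i: (time.sleep(0.05), i + 1)[1])  # 模拟t2=50毫秒的后端计算
enqueue_time = time.time() - start

tasks.put(None)
worker.join()  # 相当于一个障碍器:等待后端清空队列
total_time = time.time() - start

print(results)  # [1, 2, 3, 4, 5]
print(enqueue_time < total_time)  # True
```

前端插入全部任务几乎不花时间,而真正的计算时间只有在同步(`join`)时才体现出来,这正对应上文$t_1 + 10000\,t_2 + t_3$的分析。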
{
"cells": [
{
"cell_type": "markdown",
"id": "7a4cb8fa",
"metadata": {
"origin_pos": 0
},
"source": [
"# 自动并行\n",
":label:`sec_auto_para`\n",
"\n",
"深度学习框架(例如,MXNet、飞桨和PyTorch)会在后端自动构建计算图。利用计算图,系统可以了解所有依赖关系,并且可以选择性地并行执行多个不相互依赖的任务以提高速度。例如, :numref:`sec_async`中的 :numref:`fig_asyncgraph`独立初始化两个变量。因此,系统可以选择并行执行它们。\n",
"\n",
"通常情况下单个操作符将使用所有CPU或单个GPU上的所有计算资源。例如,即使在一台机器上有多个CPU处理器,`dot`操作符也将使用所有CPU上的所有核心(和线程)。这样的行为同样适用于单个GPU。因此,并行化对单设备计算机来说并不是很有用,而并行化对于多个设备就很重要了。虽然并行化通常应用在多个GPU之间,但增加本地CPU以后还将提高少许性能。例如, :cite:`Hadjis.Zhang.Mitliagkas.ea.2016`则把结合GPU和CPU的训练应用到计算机视觉模型中。借助自动并行化框架的便利性,我们可以依靠几行Python代码实现相同的目标。对自动并行计算的讨论主要集中在使用CPU和GPU的并行计算上,以及计算和通信的并行化内容。\n",
"\n",
"请注意,本节中的实验至少需要两个GPU来运行。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "8c944f1a",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:11:59.505418Z",
"iopub.status.busy": "2023-08-18T07:11:59.504686Z",
"iopub.status.idle": "2023-08-18T07:12:02.958789Z",
"shell.execute_reply": "2023-08-18T07:12:02.957933Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import torch\n",
"from d2l import torch as d2l"
]
},
{
"cell_type": "markdown",
"id": "4c8e7569",
"metadata": {
"origin_pos": 4
},
"source": [
"## 基于GPU的并行计算\n",
"\n",
"从定义一个具有参考性的用于测试的工作负载开始:下面的`run`函数将执行$50$次*矩阵-矩阵*乘法,所需的数据被分配到两个变量(`x_gpu1`和`x_gpu2`)中,这两个变量分别位于两个不同的设备上。\n",
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "5e7b039a",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:12:02.987012Z",
"iopub.status.busy": "2023-08-18T07:12:02.986327Z",
"iopub.status.idle": "2023-08-18T07:12:05.221346Z",
"shell.execute_reply": "2023-08-18T07:12:05.220262Z"
},
"origin_pos": 6,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"devices = d2l.try_all_gpus()\n",
"def run(x):\n",
" return [x.mm(x) for _ in range(50)]\n",
"\n",
"x_gpu1 = torch.rand(size=(4000, 4000), device=devices[0])\n",
"x_gpu2 = torch.rand(size=(4000, 4000), device=devices[1])"
]
},
{
"cell_type": "markdown",
"id": "c2f2ffe6",
"metadata": {
"origin_pos": 9,
"tab": [
"pytorch"
]
},
"source": [
"现在使用`run`函数来处理数据。在测量之前先预热设备(对设备执行一次传递),以确保缓存不会影响最终的结果。`torch.cuda.synchronize()`函数将会等待一个CUDA设备上所有流中的所有核心完成计算。该函数接受一个`device`参数,代表哪个设备需要同步。如果`device`参数是`None`(默认值),它将使用`current_device()`找出的当前设备。\n",
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "970d8c24",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:12:05.225646Z",
"iopub.status.busy": "2023-08-18T07:12:05.224864Z",
"iopub.status.idle": "2023-08-18T07:12:07.664593Z",
"shell.execute_reply": "2023-08-18T07:12:07.663740Z"
},
"origin_pos": 12,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"GPU1 time: 0.4600 sec\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"GPU2 time: 0.4706 sec\n"
]
}
],
"source": [
"run(x_gpu1)\n",
"run(x_gpu2) # 预热设备\n",
"torch.cuda.synchronize(devices[0])\n",
"torch.cuda.synchronize(devices[1])\n",
"\n",
"with d2l.Benchmark('GPU1 time'):\n",
" run(x_gpu1)\n",
" torch.cuda.synchronize(devices[0])\n",
"\n",
"with d2l.Benchmark('GPU2 time'):\n",
" run(x_gpu2)\n",
" torch.cuda.synchronize(devices[1])"
]
},
{
"cell_type": "markdown",
"id": "4df4f720",
"metadata": {
"origin_pos": 15,
"tab": [
"pytorch"
]
},
"source": [
"如果删除两个任务之间的`synchronize`语句,系统就可以在两个设备上自动实现并行计算。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "d6a567e4",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:12:07.668313Z",
"iopub.status.busy": "2023-08-18T07:12:07.667763Z",
"iopub.status.idle": "2023-08-18T07:12:08.130167Z",
"shell.execute_reply": "2023-08-18T07:12:08.129377Z"
},
"origin_pos": 18,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"GPU1 & GPU2: 0.4580 sec\n"
]
}
],
"source": [
"with d2l.Benchmark('GPU1 & GPU2'):\n",
" run(x_gpu1)\n",
" run(x_gpu2)\n",
" torch.cuda.synchronize()"
]
},
{
"cell_type": "markdown",
"id": "a04f1ffe",
"metadata": {
"origin_pos": 20
},
"source": [
"在上述情况下,总执行时间小于两个部分执行时间的总和,因为深度学习框架自动调度两个GPU设备上的计算,而不需要用户编写复杂的代码。\n",
"\n",
"## 并行计算与通信\n",
"\n",
"在许多情况下,我们需要在不同的设备之间移动数据,比如在CPU和GPU之间,或者在不同的GPU之间。例如,当执行分布式优化时,就需要移动数据来聚合多个加速卡上的梯度。让我们通过在GPU上计算,然后将结果复制回CPU来模拟这个过程。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "3b71f533",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:12:08.133753Z",
"iopub.status.busy": "2023-08-18T07:12:08.133184Z",
"iopub.status.idle": "2023-08-18T07:12:10.950227Z",
"shell.execute_reply": "2023-08-18T07:12:10.949308Z"
},
"origin_pos": 22,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"在GPU1上运行: 0.4608 sec\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"复制到CPU: 2.3504 sec\n"
]
}
],
"source": [
"def copy_to_cpu(x, non_blocking=False):\n",
" return [y.to('cpu', non_blocking=non_blocking) for y in x]\n",
"\n",
"with d2l.Benchmark('在GPU1上运行'):\n",
" y = run(x_gpu1)\n",
" torch.cuda.synchronize()\n",
"\n",
"with d2l.Benchmark('复制到CPU'):\n",
" y_cpu = copy_to_cpu(y)\n",
" torch.cuda.synchronize()"
]
},
{
"cell_type": "markdown",
"id": "5290ab0c",
"metadata": {
"origin_pos": 25,
"tab": [
"pytorch"
]
},
"source": [
"这种方式效率不高。注意,当列表中的其余部分还在计算时,我们其实就可以开始将`y`的一部分复制到CPU了。例如,当计算一个小批量的(反向传播)梯度时,某些参数的梯度将比其他参数的梯度更早可用。因此,在GPU仍在运行时就开始使用PCI-Express总线带宽来移动数据是有利的。在PyTorch中,`to()`和`copy_()`等函数都接受显式的`non_blocking`参数,使得调用方可以在不需要同步时绕过同步。下面设置`non_blocking=True`来模拟这个场景。\n",
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "b6ecdc54",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:12:10.954084Z",
"iopub.status.busy": "2023-08-18T07:12:10.953336Z",
"iopub.status.idle": "2023-08-18T07:12:12.728692Z",
"shell.execute_reply": "2023-08-18T07:12:12.727837Z"
},
"origin_pos": 28,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"在GPU1上运行并复制到CPU: 1.7703 sec\n"
]
}
],
"source": [
"with d2l.Benchmark('在GPU1上运行并复制到CPU'):\n",
" y = run(x_gpu1)\n",
" y_cpu = copy_to_cpu(y, True)\n",
" torch.cuda.synchronize()"
]
},
{
"cell_type": "markdown",
"id": "58a269e8",
"metadata": {
"origin_pos": 30
},
"source": [
"两个操作所需的总时间少于它们各部分操作所需时间的总和。请注意,与并行计算的区别是通信操作使用的资源:CPU和GPU之间的总线。事实上,我们可以在两个设备上同时进行计算和通信。如上所述,计算和通信之间存在的依赖关系是必须先计算`y[i]`,然后才能将其复制到CPU。幸运的是,系统可以在计算`y[i]`的同时复制`y[i-1]`,以减少总的运行时间。\n",
"\n",
"最后,本节给出了一个简单的两层多层感知机在CPU和两个GPU上训练时的计算图及其依赖关系的例子,如 :numref:`fig_twogpu`所示。手动调度由此产生的并行程序将是相当痛苦的。这就是基于图的计算后端进行优化的优势所在。\n",
"\n",
"![在一个CPU和两个GPU上的两层的多层感知机的计算图及其依赖关系](../img/twogpu.svg)\n",
":label:`fig_twogpu`\n",
"\n",
"## 小结\n",
"\n",
"* 现代系统拥有多种设备,如多个GPU和多个CPU,还可以并行地、异步地使用它们。\n",
"* 现代系统还拥有各种通信资源,如PCI Express、存储(通常是固态硬盘或网络存储)和网络带宽,为了达到最高效率可以并行使用它们。\n",
"* 后端可以通过自动化地并行计算和通信来提高性能。\n",
"\n",
"## 练习\n",
"\n",
"1. 在本节定义的`run`函数中执行了八个操作,并且操作之间没有依赖关系。设计一个实验,看看深度学习框架是否会自动地并行地执行它们。\n",
"1. 当单个操作符的工作量足够小,即使在单个CPU或GPU上,并行化也会有所帮助。设计一个实验来验证这一点。\n",
"1. 设计一个实验,在CPU和GPU这两种设备上使用并行计算和通信。\n",
"1. 使用诸如NVIDIA的[Nsight](https://developer.nvidia.com/nsight-compute-2019_5)之类的调试器来验证代码是否有效。\n",
"1. 设计并实验具有更加复杂的数据依赖关系的计算任务,以查看是否可以在提高性能的同时获得正确的结果。\n"
]
},
{
"cell_type": "markdown",
"id": "88f15d8c",
"metadata": {
"origin_pos": 32,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/2794)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
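两个设备上的并行执行同样可以用一个纯Python的线程模型来示意。这只是一个玩具模型:用`time.sleep`代替真实设备上的一批矩阵乘法,耗时数值为假设值。

```python
import threading
import time

def run(device_work=0.1):
    time.sleep(device_work)  # 用休眠代替某个设备上的一批矩阵乘法

# 串行:先在"设备1"上运行并同步,再在"设备2"上运行
start = time.time()
run()
run()
serial = time.time() - start

# 并行:两个"设备"同时运行,只在最后同步一次
start = time.time()
threads = [threading.Thread(target=run) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # 类似于最后统一调用torch.cuda.synchronize()
parallel = time.time() - start

print(serial > parallel)  # True
```

去掉中间的同步之后,两个独立任务的总耗时接近单个任务的耗时,这与上文删除`synchronize`语句后观察到的现象一致。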
{
"cells": [
{
"cell_type": "markdown",
"id": "7ac11a89",
"metadata": {
"origin_pos": 0
},
"source": [
"# 硬件\n",
":label:`sec_hardware`\n",
"\n",
"很好地理解算法和模型才可以捕获统计方面的问题,构建出具有出色性能的系统。同时,至少对底层硬件有一定的了解也是必不可少的。本节不能替代硬件和系统设计的相关课程。相反,本节的内容可以作为理解某些算法为什么比其他算法更高效以及如何实现良好吞吐量的起点。一个好的设计可以很容易地在性能上造就数量级的差异,这也是能够训练网络(例如,训练时间为$1$周)和无法训练网络(训练时间为$3$个月,导致错过截止期)之间的差异。我们先从计算机的研究开始。然后深入查看CPU和GPU。最后,再查看数据中心或云中的多台计算机的连接方式。\n",
"\n",
"![每个程序员都应该知道的延迟数字](../img/latencynumbers.png)\n",
":label:`fig_latencynumbers`\n",
"\n",
"也可以通过 :numref:`fig_latencynumbers`进行简单的了解,图片源自科林·斯科特的[互动帖子](https://people.eecs.berkeley.edu/~rcs/research/interactive_latency.html),在帖子中很好地概述了过去十年的进展。原始的数字是取自于杰夫·迪恩的[Stanford讲座](https://static.googleusercontent.com/media/research.google.com/en//people/jeff/Stanford-DL-Nov-2010.pdf)。下面的讨论解释了这些数字的一些基本原理,以及它们如何指导我们去设计算法。下面的讨论是非常笼统和粗略的。很显然,它并不能代替一门完整的课程,而只是为了给统计建模者提供足够的信息,让他们做出合适的设计决策。对于计算机体系结构的深入概述,建议读者参考 :cite:`Hennessy.Patterson.2011`或关于该主题的最新课程,例如[Krste Asanovic](http://inst.eecs.berkeley.edu/~cs152/sp19/)。\n",
"\n",
"## 计算机\n",
"\n",
"大多数深度学习研究者和实践者都可以使用一台具有相当数量的内存、计算资源、某种形式的加速器(如一个或者多个GPU)的计算机。计算机由以下关键部件组成:\n",
"\n",
"* 一个处理器(也被称为CPU),它除了能够运行操作系统和许多其他功能之外,还能够执行给定的程序。它通常由$8$个或更多个核心组成;\n",
"* 内存(随机访问存储,RAM)用于存储和检索计算结果,如权重向量和激活参数,以及训练数据;\n",
"* 一个或多个以太网连接,速度从1GB/s到100GB/s不等。在高端服务器上可能用到更高级的互连;\n",
"* 高速扩展总线(PCIe)用于系统连接一个或多个GPU。服务器最多有$8$个加速卡,通常以更高级的拓扑方式连接,而桌面系统则有$1$个或$2$个加速卡,具体取决于用户的预算和电源负载的大小;\n",
"* 持久性存储设备,如磁盘驱动器、固态驱动器,在许多情况下使用高速扩展总线连接。它为系统需要的训练数据和中间检查点需要的存储提供了足够的传输速度。\n",
"\n",
"![计算机组件的连接](../img/mobo-symbol.svg)\n",
":label:`fig_mobo-symbol`\n",
"\n",
"如 :numref:`fig_mobo-symbol`所示,高速扩展总线由直接连接到CPU的多个通道组成,将CPU与大多数组件(网络、GPU和存储)连接在一起。例如,AMD的Threadripper3有$64$个PCIe4.0通道,每个通道都能够双向传输16Gbit/s的数据。内存直接连接到CPU,总带宽高达100GB/s。\n",
"\n",
"当我们在计算机上运行代码时,需要将数据转移到处理器上(CPU或GPU)执行计算,然后将结果从处理器移回到随机访问存储和持久存储器中。因此,为了获得良好的性能,需要确保每一步工作都能无缝链接,而不希望系统中的任何一部分成为主要的瓶颈。例如,如果不能快速加载图像,那么处理器就无事可做。同样地,如果不能快速移动矩阵到CPU(或GPU)上,那么CPU(或GPU)就会无法全速运行。最后,如果希望在网络上同步多台计算机,那么网络就不应该拖累计算速度。一种选择是通信和计算交错进行。接下来将详细地介绍各个组件。\n",
"\n",
"## 内存\n",
"\n",
"最基本的内存主要用于存储需要随时访问的数据。目前,CPU的内存通常为[DDR4](https://en.wikipedia.org/wiki/DDR4_SDRAM)类型,每个模块提供20-25GB/s的带宽。每个模块都有一条$64$位宽的总线。通常使用成对的内存模块来允许多个通道。CPU有$2$到$4$个内存通道,也就是说,它们内存带宽的峰值在40GB/s到100GB/s之间。一般每个通道有两个物理存储体(bank)。例如AMD的Zen 3 Threadripper有$8$个插槽。\n",
"\n",
"虽然这些数字令人印象深刻,但实际上它们只能说明了一部分故事。当我们想要从内存中读取一部分内容时,需要先告诉内存模块在哪里可以找到信息。也就是说,我们需要先将*地址*(address)发送到RAM。然后我们可以选择只读取一条$64$位记录还是一长串记录。后者称为*突发读取*(burst read)。概括地说,向内存发送地址并设置传输大约需要100ns(细节取决于所用内存芯片的特定定时系数),每个后续传输只需要0.2ns。总之,第一次读取的成本是后续读取的500倍!请注意,每秒最多可以执行一千万次随机读取。这说明应该尽可能地避免随机内存访问,而是使用突发模式读取和写入。\n",
"\n",
"当考虑到拥有多个物理存储体时,事情就更加复杂了。每个存储体大部分时候都可以独立地读取内存。这意味着两件事。一方面,如果随机读操作均匀分布在内存中,那么有效的随机读操作次数将高达4倍。这也意味着执行随机读取仍然不是一个好主意,因为突发读取的速度也快了4倍。另一方面,由于内存对齐是$64$位边界,因此最好将任何数据结构与相同的边界对齐。当设置了适当的标志时,编译器基本上就是[自动化](https://en.wikipedia.org/wiki/Data_structure_alignment)地执行对齐操作。我们鼓励好奇的读者回顾一下[Zeshan Chishti关于DRAM的讲座](http://web.cecs.pdx.edu/~zeshan/ece585_lec5.pdf)。\n",
"\n",
"GPU内存的带宽要求甚至更高,因为它们的处理单元比CPU多得多。总的来说,解决这些问题有两种选择。首先是使内存总线变得更宽。例如,NVIDIA的RTX 2080Ti有一条352位宽的总线。这样就可以同时传输更多的信息。其次,GPU使用特定的高性能内存。消费级设备,如NVIDIA的RTX和Titan系列,通常使用[GDDR6](https://en.wikipedia.org/wiki/GDDR6_SDRAM)芯片,总带宽超过500GB/s。另一种选择是使用HBM(高带宽存储器)模块。它们使用截然不同的接口,直接与专用硅片上的GPU连接。这使得它们非常昂贵,通常仅限于高端服务器芯片,如NVIDIA Volta V100系列加速卡。毫不意外的是GPU的内存通常比CPU的内存小得多,因为前者的成本更高。就目的而言,它们的性能与特征大体上是相似的,只是GPU的速度更快。就本书而言,我们完全可以忽略细节,因为这些技术只在调整GPU核心以获得高吞吐量时才起作用。\n",
"\n",
"## 存储器\n",
"\n",
"随机访问存储的一些关键特性是 *带宽*(bandwidth)和 *延迟*(latency)。存储设备也是如此,只是不同设备之间的特性差异可能更大。\n",
"\n",
"### 硬盘驱动器\n",
"\n",
"*硬盘驱动器*hard disk drive,HDD)已经使用了半个多世纪。简单的说,它们包含许多旋转的盘片,这些盘片的磁头可以放置在任何给定的磁道上进行读写。高端磁盘在$9$个盘片上可容纳高达16TB的容量。硬盘的主要优点之一是相对便宜,而它们的众多缺点之一是典型的灾难性故障模式和相对较高的读取延迟。\n",
"\n",
"要理解后者,请了解一个事实即硬盘驱动器的转速大约为7200RPM(每分钟转数)。它们如果转速再快些,就会由于施加在碟片上的离心力而破碎。在访问磁盘上的特定扇区时,还有一个关键问题:需要等待碟片旋转到位(可以移动磁头,但是无法对磁盘加速)。因此,可能需要$8$毫秒才能使用请求的数据。一种常见的描述方式是,硬盘驱动器可以以大约100IOPs(每秒输入/输出操作)的速度工作,并且在过去二十年中这个数字基本上没变。同样糟糕的是,带宽(大约为100-200MB/s)也很难增加。毕竟,每个磁头读取一个磁道的比特,因此比特率只随信息密度的平方根缩放。因此,对于非常大的数据集,HDD正迅速降级为归档存储和低级存储。\n",
"\n",
"### 固态驱动器\n",
"\n",
"固态驱动器(solid state drives,SSD)使用闪存持久地存储信息。这允许更快地访问存储的记录。现代的固态驱动器的IOPs可以达到$10$万到$50$万,比硬盘驱动器快3个数量级。而且,它们的带宽可以达到1-3GB/s,比硬盘驱动器快一个数量级。这些改进听起来好得难以置信,而事实上,受固态驱动器设计方式的限制,它仍然存在下面的附加条件。\n",
"\n",
"* 固态驱动器以块的方式(256KB或更大)存储信息。块只能作为一个整体来写入,因此需要耗费大量的时间,导致固态驱动器在按位随机写入时性能非常差。而且通常数据写入需要大量的时间还因为块必须被读取、擦除,然后再重新写入新的信息。如今固态驱动器的控制器和固件已经开发出了缓解这种情况的算法。尽管有了算法,写入速度仍然会比读取慢得多,特别是对于QLC(四层单元)固态驱动器。提高性能的关键是维护操作的“队列”,在队列中尽可能地优先读取和写入大的块。\n",
"* 固态驱动器中的存储单元磨损得比较快(通常在几千次写入之后就已经老化了)。磨损程度保护算法能够将退化平摊到许多单元。也就是说,不建议将固态驱动器用于交换分区文件或大型日志文件。\n",
"* 最后,带宽的大幅增加迫使计算机设计者将固态驱动器与PCIe总线相连接,这种驱动器称为NVMe(非易失性内存增强),其最多可以使用$4$个PCIe通道。在PCIe4.0上最高可达8GB/s。\n",
"\n",
"### 云存储\n",
"\n",
"云存储提供了一系列可配置的性能。也就是说,虚拟机的存储在数量和速度上都能根据用户需要进行动态分配。建议用户在延迟太高时(例如,在训练期间存在许多小记录时)增加IOPs的配置数。\n",
"\n",
"## CPU\n",
"\n",
"中央处理器(central processing unit,CPU)是任何计算机的核心。它们由许多关键组件组成:*处理器核心*(processor cores)用于执行机器代码;*总线*(bus)用于连接不同组件(注意,总线会因为处理器型号、各代产品和供应商之间的特定拓扑结构有明显不同);*缓存*(cache)相比主内存实现更高的读取带宽和更低的延迟内存访问。最后,因为高性能线性代数和卷积运算常见于媒体处理和机器学习中,所以几乎所有的现代CPU都包含*向量处理单元*(vector processing unit)为这些计算提供辅助。\n",
"\n",
"![Intel Skylake消费级四核CPU](../img/skylake.svg)\n",
":label:`fig_skylake`\n",
"\n",
" :numref:`fig_skylake`描述了Intel Skylake消费级四核CPU。它包含一个集成GPU、缓存和一个连接四个核心的环总线。例如,以太网、WiFi、蓝牙、SSD控制器和USB这些外围设备要么是芯片组的一部分,要么通过PCIe直接连接到CPU。\n",
"\n",
"### 微体系结构\n",
"\n",
"每个处理器核心都由一组相当复杂的组件组成。虽然不同时代的产品和供应商的细节有所不同,但基本功能都是标准的。前端加载指令并尝试预测将采用哪条路径(例如,为了控制流),然后将指令从汇编代码解码为微指令。汇编代码通常不是处理器执行的最低级别代码,复杂的指令可以被解码成一组更低级的操作,然后由实际的执行核心处理。通常执行核心能够同时执行许多操作,例如, :numref:`fig_cortexa77`的ARM Cortex A77核心可以同时执行多达$8$个操作。\n",
"\n",
"![ARM Cortex A77微体系结构](../img/a77.svg)\n",
":label:`fig_cortexa77`\n",
"\n",
"这意味着高效的程序可以在每个时钟周期内执行多条指令,前提是这些指令可以独立执行。不是所有的处理单元都是平等的。一些专用于处理整数指令,而另一些则针对浮点性能进行了优化。为了提高吞吐量,处理器还可以在分支指令中同时执行多条代码路径,然后丢弃未选择分支的结果。这就是为什么前端的分支预测单元很重要,因为只有最有希望的路径才会被继续执行。\n",
"\n",
"### 矢量化\n",
"\n",
"深度学习的计算量非常大。因此,为了满足机器学习的需要,CPU需要在一个时钟周期内执行许多操作。这种执行方式是通过向量处理单元实现的。这些处理单元有不同的名称:在ARM上叫做NEON,在x86上被称为[AVX2](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions)。一个常见的功能是它们能够执行单指令多数据(single instruction multiple data,SIMD)操作。 :numref:`fig_neon128`显示了如何在ARM上的一个时钟周期中完成$8$个整数加法。\n",
"\n",
"![128位NEON矢量化](../img/neon128.svg)\n",
":label:`fig_neon128`\n",
"\n",
"根据体系结构的选择,此类寄存器最长可达$512$位,最多可组合$64$对数字。例如,我们可能会将两个数字相乘,然后与第三个数字相加,这也称为乘加融合(fused multiply-add)。Intel的[OpenVino](https://01.org/openvinotoolkit)就是使用这些处理器来获得可观的吞吐量,以便在服务器级CPU上进行深度学习。不过请注意,这个数字与GPU的能力相比则相形见绌。例如,NVIDIA的RTX 2080Ti拥有$4352$个CUDA核心,每个核心都能够在任何时候处理这样的操作。\n",
"\n",
"### 缓存\n",
"\n",
"考虑以下情况:我们有一个中等规模的$4$核心的CPU,如 :numref:`fig_skylake`所示,运行在2GHz频率。此外,假设向量处理单元启用了$256$位带宽的AVX2,其IPC(指令/时钟)计数为1。进一步假设从内存中获取用于AVX2操作的指令至少需要一个寄存器。这意味着CPU每个时钟周期需要消耗$4 \\times 256 \\text{ bit} = 128 \\text{ bytes}$的数据。除非我们能够每秒向处理器传输$2 \\times 10^9 \\times 128 = 256 \\times 10^9$字节,否则用于处理的数据将会不足。不幸的是,这种芯片的存储器接口仅支持20-40GB/s的数据传输,即少了一个数量级。解决方法是尽可能避免从内存中加载新数据,而是将数据放在CPU的缓存上。这就是使用缓存的地方。通常使用以下名称或概念。\n",
"\n",
"* **寄存器**,严格来说不是缓存的一部分,用于帮助组织指令。也就是说,寄存器是CPU可以以时钟速度访问而没有延迟的存储位置。CPU有几十个寄存器,因此有效地使用寄存器取决于编译器(或程序员)。例如,C语言有一个`register`关键字。\n",
"* **一级缓存**是应对高内存带宽要求的第一道防线。一级缓存很小(常见的大小可能是32-64KB),内容通常分为数据和指令。当数据在一级缓存中被找到时,其访问速度非常快,如果没有在那里找到,搜索将沿着缓存层次结构向下寻找。\n",
"* **二级缓存**是下一站。根据架构设计和处理器大小的不同,它们可能是独占的也可能是共享的。即它们可能只能由给定的核心访问,或者在多个核心之间共享。二级缓存比一级缓存大(通常每个核心256-512KB),而速度也更慢。此外,我们首先需要检查以确定数据不在一级缓存中,才会访问二级缓存中的内容,这会增加少量的额外延迟。\n",
"* **三级缓存**在多个核之间共享,并且可以非常大。AMD的EPYC 3服务器的CPU在多个芯片上拥有高达256MB的高速缓存。更常见的数字在4-8MB范围内。\n",
"\n",
"预测下一步需要哪个存储设备是优化芯片设计的关键参数之一。例如,建议以*向前*的方向遍历内存,因为大多数缓存算法将试图*向前读取*(read forward)而不是向后读取。同样,将内存访问模式保持在本地也是提高性能的一个好方法。\n",
"\n",
"添加缓存是一把双刃剑。一方面,它能确保处理器核心不缺乏数据。但同时,它也增加了芯片尺寸,消耗了原本可以用来提高处理能力的面积。此外,*缓存未命中*的代价可能会很昂贵。考虑最坏的情况,如 :numref:`fig_falsesharing`所示的*错误共享*false sharing)。当处理器$1$上的线程请求数据时,内存位置缓存在处理器$0$上。为了满足获取需要,处理器$0$需要停止它正在做的事情,将信息写回主内存,然后让处理器$1$从内存中读取它。在此操作期间,两个处理器都需要等待。与高效的单处理器实现相比,这种代码在多个处理器上运行的速度可能要慢得多。这就是为什么缓存大小(除了物理大小之外)有实际限制的另一个原因。\n",
"\n",
"![错误共享(图片由英特尔提供)](../img/falsesharing.svg)\n",
":label:`fig_falsesharing`\n",
"\n",
"## GPU和其他加速卡\n",
"\n",
"毫不夸张地说,如果没有GPU,深度学习就不会成功。基于同样的原因,有理由认为GPU制造商的财富由于深度学习而显著增加。这种硬件和算法的协同进化导致了这样一种情况:无论好坏,深度学习都是更可取的统计建模范式。因此,了解GPU和其他加速卡(如TPU :cite:`Jouppi.Young.Patil.ea.2017`)的具体好处是值得的。\n",
"\n",
"值得注意的是,在实践中经常会有这样一个区分:加速卡是为训练还是推断而优化的。对于后者,我们只需要计算网络中的前向传播,不需要反向传播,因此也不需要存储中间数据。而且,我们可能不需要非常精确的计算(FP16或INT8通常就足够了)。对于前者,即训练过程中需要存储所有的中间结果用来计算梯度。而且,累积梯度也需要更高的精度,以避免数值下溢(或溢出)。这意味着最低要求也是FP16(或FP16与FP32的混合精度)。所有这些都需要更快、更大的内存(HBM2或者GDDR6)和更高的处理能力。例如,NVIDIA优化了[Turing](https://devblogs.nvidia.com/nvidia-turing-architecture-in-depth/) T4 GPU用于推断和V100 GPU用于训练。\n",
"\n",
"回想一下如 :numref:`fig_neon128`所示的矢量化。处理器核心中添加向量处理单元可以显著提高吞吐量。例如,在 :numref:`fig_neon128`的例子中,我们能够同时执行$16$个操作。首先,如果我们添加的运算不仅优化了向量运算,而且优化了矩阵运算,会有什么好处?稍后我们将讨论基于这个策略引入的张量核(tensor cores)。第二,如果我们增加更多的核心呢?简而言之,以上就是GPU设计决策中的两种策略。 :numref:`fig_turing_processing_block`给出了基本处理块的概述。它包含$16$个整数单位和$16$个浮点单位。除此之外,两个张量核加速了与深度学习相关的附加操作的狭窄的子集。每个流式多处理器都由这样的四个块组成。\n",
"\n",
"![NVIDIA Turing处理块(图片由英伟达提供)](../img/turing-processing-block.png)\n",
":width:`150px`\n",
":label:`fig_turing_processing_block`\n",
"\n",
"接下来,将$12$个流式多处理器分组为图形处理集群,这些集群构成了高端TU102处理器。充足的内存通道和二级缓存完善了配置。 :numref:`fig_turing`有相关的细节。设计这种设备的原因之一是可以根据需要独立地添加或删除模块,从而满足设计更紧凑的芯片和处理良品率问题(故障模块可能无法激活)的需要。幸运的是,在CUDA和框架代码层之下,这类设备的编程对深度学习的临时研究员隐藏得很好。特别是,只要有可用的资源GPU上就可以同时执行多个程序。尽管如此,了解设备的局限性是值得的,以避免对应的设备内存的型号不合适。\n",
"\n",
"![NVIDIA Turing架构(图片由英伟达提供)](../img/turing.png)\n",
":width:`350px`\n",
":label:`fig_turing`\n",
"\n",
"最后值得一提的是*张量核*(tensor core)。它们是最近增加更多优化电路趋势的一个例子,这些优化电路对深度学习特别有效。例如,TPU添加了用于快速矩阵乘法的脉动阵列 :cite:`Kung.1988`,这种设计是为了支持非常小数量(第一代TPU支持数量为1)的大型操作。而张量核是另一个极端。它们针对$4 \\times 4$和$16 \\times 16$矩阵之间的小型运算进行了优化,具体取决于它们的数值精度。 :numref:`fig_tensorcore`给出了优化的概述。\n",
"\n",
"![NVIDIA Turing架构中的张量核心(图片由英伟达提供)](../img/tensorcore.jpg)\n",
":width:`400px`\n",
":label:`fig_tensorcore`\n",
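张量核的思想可以用分块矩阵乘法来理解。下面用NumPy模拟把一次大矩阵乘法分解为$4\times 4$小块乘积的累加(仅为说明思路的示意,并非硬件的真实实现):

```python
import numpy as np

def blocked_matmul(A, B, block=4):
    """把n×n矩阵乘法分解为block×block的小块乘积并累加,
    每一步的小矩阵乘法对应张量核一次完成的基本运算"""
    n = A.shape[0]
    C = np.zeros((n, n), dtype=A.dtype)
    for i in range(0, n, block):
        for j in range(0, n, block):
            for k in range(0, n, block):
                C[i:i+block, j:j+block] += (
                    A[i:i+block, k:k+block] @ B[k:k+block, j:j+block])
    return C

A = np.random.rand(16, 16)
B = np.random.rand(16, 16)
assert np.allclose(blocked_matmul(A, B), A @ B)
```

硬件上的好处在于:每个$4\times 4$小块乘法可以在一个专用电路中流水化完成,而不是由通用浮点单元逐个标量地计算。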
"\n",
"显然,我们最终会在优化计算时做出某些妥协。其中之一是GPU不太擅长处理稀疏数据和中断。尽管有一些明显的例外,如[Gunrock](https://github.com/gunrock/gunrock) :cite:`Wang.Davidson.Pan.ea.2016`,但GPU擅长的高带宽突发读取操作并不适合稀疏的矩阵和向量的访问模式。访问稀疏数据和处理中断这两个目标是一个积极研究的领域。例如:[DGL](http://dgl.ai),一个专为图深度学习而设计的库。\n",
"\n",
"## 网络和总线\n",
"\n",
"每当单个设备不足以进行优化时,我们就需要来回传输数据以实现同步处理,于是网络和总线就派上了用场。我们有许多设计参数:带宽、成本、距离和灵活性。在低端应用中有WiFi,它有非常好的使用范围,非常容易使用(毕竟没有线缆),而且还便宜,但它提供的带宽和延迟相对一般。头脑正常的机器学习研究人员都不会用它来构建服务器集群。接下来的内容中将重点关注适合深度学习的互连方式。\n",
"\n",
"* **PCIe**,一种专用总线,用于每个通道点到点连接的高带宽需求(在$16$通道插槽中的PCIe4.0上高达32GB/s),延迟时间为个位数的微秒(5μs)。PCIe链接非常宝贵。处理器拥有的数量:AMD的EPYC 3有$128$个通道,Intel的Xeon每个芯片有$48$个通道;在桌面级CPU上,数字分别是$20$个(Ryzen 9)和$16$个(Core i9)。由于GPU通常有$16$个通道,这就限制了以全带宽与CPU连接的GPU数量。毕竟,它们还需要与其他高带宽外围设备(如存储和以太网)共享链路。与RAM访问一样,由于减少了数据包的开销,因此更适合大批量数据传输。\n",
"* **以太网**,连接计算机最常用的方式。虽然它比PCIe慢得多,但它的安装成本非常低,而且具有很强的弹性,覆盖的距离也要长得多。低级服务器的典型带宽为1GBit/s。高端设备(如云中的[C5实例](https://aws.amazon.com/ec2/instance-types/c5/))提供10~100GBit/s的带宽。与以前所有的情况一样,数据传输有很大的开销。请注意,原始以太网几乎从不被直接使用,而是使用在物理互连之上执行的协议(例如UDP或TCP/IP)。这进一步增加了开销。与PCIe类似,以太网旨在连接两个设备,例如计算机和交换机。\n",
"* **交换机**,一种连接多个设备的方式,该连接方式下的任何一对设备都可以同时执行(通常是全带宽)点对点连接。例如,以太网交换机可能以高带宽连接$40$台服务器。请注意,交换机并不是传统计算机网络所独有的。甚至PCIe通道也可以是[可交换的](https://www.broadcom.com/products/pcie-switches-bridges/pcie-switches),例如:[P2实例](https://aws.amazon.com/ec2/instance-types/p2/)就是将大量GPU连接到主机处理器。\n",
"* **NVLink**,是PCIe的替代品,适用于非常高带宽的互连。它为每条链路提供高达300Gbit/s的数据传输速率。服务器GPU(Volta V100)有六个链路。而消费级GPU(RTX 2080Ti)只有一个链路,运行速度也降低到100Gbit/s。建议使用[NCCL](https://github.com/NVIDIA/nccl)来实现GPU之间的高速数据传输。\n",
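可以用上面列出的带宽数字粗略估算数据传输时间。下面的小函数是一个忽略延迟与协议开销的简化示意(带宽取值来自本节正文,属于近似数字):

```python
def transfer_time_us(size_bytes, bandwidth_gb_per_s):
    """按给定带宽估算传输耗时,返回微秒数(忽略延迟与协议开销)"""
    return size_bytes / (bandwidth_gb_per_s * 1e9) * 1e6

MB = 1e6
# 近似带宽:PCIe 4.0 x16约32GB/s,NVLink每链路单向约18GB/s,
# 100GbE约12.5GB/s,1GbE约0.125GB/s
for name, bw in [('PCIe 4.0 x16', 32.0), ('NVLink per link', 18.0),
                 ('100GbE', 12.5), ('1GbE', 0.125)]:
    print(f'{name:16s} 1MB: {transfer_time_us(MB, bw):8.1f} us')
```

对比可见,同样传输1MB数据,不同互连方式的耗时相差两个数量级以上,这解释了为什么互连方式主导了多设备训练的同步开销。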
"\n",
"## 更多延迟\n",
"\n",
" :numref:`table_latency_numbers`和 :numref:`table_latency_numbers_tesla`中的小结来自[Eliot Eshelman](https://gist.github.com/eshelman),他将这些数字的更新版本保存在[GitHub gist](https://gist.github.com/eshelman/343a1c46cb3fba142c1afdcdeec17646)中。\n",
"\n",
":常见延迟。\n",
"\n",
"| Action | Time | Notes |\n",
"| :----------------------------------------- | -----: | :---------------------------------------------- |\n",
"| L1 cache reference/hit | 1.5 ns | 4 cycles |\n",
"| Floating-point add/mult/FMA | 1.5 ns | 4 cycles |\n",
"| L2 cache reference/hit | 5 ns | 12 ~ 17 cycles |\n",
"| Branch mispredict | 6 ns | 15 ~ 20 cycles |\n",
"| L3 cache hit (unshared cache) | 16 ns | 42 cycles |\n",
"| L3 cache hit (shared in another core) | 25 ns | 65 cycles |\n",
"| Mutex lock/unlock | 25 ns | |\n",
"| L3 cache hit (modified in another core) | 29 ns | 75 cycles |\n",
"| L3 cache hit (on a remote CPU socket) | 40 ns | 100 ~ 300 cycles (40 ~ 116 ns) |\n",
"| QPI hop to another CPU (per hop) | 40 ns | |\n",
"| 64MB memory ref. (local CPU) | 46 ns | TinyMemBench on Broadwell E5-2690v4 |\n",
"| 64MB memory ref. (remote CPU) | 70 ns | TinyMemBench on Broadwell E5-2690v4 |\n",
"| 256MB memory ref. (local CPU) | 75 ns | TinyMemBench on Broadwell E5-2690v4 |\n",
"| Intel Optane random write | 94 ns | UCSD Non-Volatile Systems Lab |\n",
"| 256MB memory ref. (remote CPU) | 120 ns | TinyMemBench on Broadwell E5-2690v4 |\n",
"| Intel Optane random read | 305 ns | UCSD Non-Volatile Systems Lab |\n",
"| Send 4KB over 100 Gbps HPC fabric | 1 μs | MVAPICH2 over Intel Omni-Path |\n",
"| Compress 1KB with Google Snappy | 3 μs | |\n",
"| Send 4KB over 10 Gbps ethernet | 10 μs | |\n",
"| Write 4KB randomly to NVMe SSD | 30 μs | DC P3608 NVMe SSD (QOS 99% is 500μs) |\n",
"| Transfer 1MB to/from NVLink GPU | 30 μs | ~33GB/s on NVIDIA 40GB NVLink |\n",
"| Transfer 1MB to/from PCI-E GPU | 80 μs | ~12GB/s on PCIe 3.0 x16 link |\n",
"| Read 4KB randomly from NVMe SSD | 120 μs | DC P3608 NVMe SSD (QOS 99%) |\n",
"| Read 1MB sequentially from NVMe SSD | 208 μs | ~4.8GB/s DC P3608 NVMe SSD |\n",
"| Write 4KB randomly to SATA SSD | 500 μs | DC S3510 SATA SSD (QOS 99.9%) |\n",
"| Read 4KB randomly from SATA SSD | 500 μs | DC S3510 SATA SSD (QOS 99.9%) |\n",
"| Round trip within same datacenter | 500 μs | One-way ping is ~250μs |\n",
"| Read 1MB sequentially from SATA SSD | 2 ms | ~550MB/s DC S3510 SATA SSD |\n",
"| Read 1MB sequentially from disk | 5 ms | ~200MB/s server HDD |\n",
"| Random Disk Access (seek+rotation) | 10 ms | |\n",
"| Send packet CA->Netherlands->CA | 150 ms | |\n",
":label:`table_latency_numbers`\n",
"\n",
":NVIDIA Tesla GPU的延迟.\n",
"\n",
"| Action | Time | Notes |\n",
"| :------------------------------ | -----: | :---------------------------------------- |\n",
"| GPU Shared Memory access | 30 ns | 30~90 cycles (bank conflicts add latency) |\n",
"| GPU Global Memory access | 200 ns | 200~800 cycles |\n",
"| Launch CUDA kernel on GPU | 10 μs | Host CPU instructs GPU to start kernel |\n",
"| Transfer 1MB to/from NVLink GPU | 30 μs | ~33GB/s on NVIDIA 40GB NVLink |\n",
"| Transfer 1MB to/from PCI-E GPU | 80 μs | ~12GB/s on PCI-Express x16 link |\n",
":label:`table_latency_numbers_tesla`\n",
"\n",
"## 小结\n",
"\n",
"* 设备有运行开销。因此,数据传输要争取量大次少而不是量少次多。这适用于RAM、固态驱动器、网络和GPU。\n",
"* 矢量化是性能的关键。确保充分了解加速器的特定功能。例如,一些Intel Xeon CPU特别适用于INT8操作,NVIDIA Volta GPU擅长FP16矩阵操作,NVIDIA Turing擅长FP16、INT8和INT4操作。\n",
"* 在训练过程中数据类型过小导致的数值溢出可能是个问题(在推断过程中则影响不大)。\n",
"* 数据混叠现象会导致严重的性能退化。$64$位CPU应该按照$64$位边界进行内存对齐。在GPU上建议保持卷积大小对齐,例如:与张量核对齐。\n",
"* 将算法与硬件相匹配(例如,内存占用和带宽)。当参数能够装入缓存时,可以实现数量级的加速。\n",
"* 在验证实验结果之前,建议先在纸上勾勒出新算法的性能。需要关注的是数量级及以上的差异。\n",
"* 使用调试器跟踪调试寻找性能的瓶颈。\n",
"* 训练硬件和推断硬件在性能和价格方面有不同的优点。\n",
"\n",
"## 练习\n",
"\n",
"1. 编写C程序来测试访问对齐的内存和未对齐的内存之间的速度是否有任何差异。(提示:小心缓存影响。)\n",
"1. 测试按顺序访问或按给定步幅访问内存时的速度差异。\n",
"1. 如何测量CPU上的缓存大小?\n",
"1. 如何在多个内存通道中分配数据以获得最大带宽?如果有许多小的线程,会怎么布置?\n",
"1. 一个企业级硬盘正在以10000转/分的速度旋转。在最坏的情况下,硬盘读取数据所需的最短时间是多少(假设磁头几乎是瞬间移动的)?为什么2.5英寸硬盘在商用服务器上越来越流行(相对于3.5英寸硬盘和5.25英寸硬盘)?\n",
"1. 假设HDD制造商将存储密度从每平方英寸1 Tbit增加到每平方英寸5 Tbit。在一个2.5英寸的硬盘上,一个磁道中能够存储多少信息?内磁道和外磁道有区别吗?\n",
"1. 从$8$位数据类型到$16$位数据类型,所需硅片面积大约增加了四倍,为什么?为什么NVIDIA会在其图灵GPU中添加INT4运算?\n",
"1. 在内存中向前读比向后读快多少?该数字在不同的计算机和CPU供应商之间是否有所不同?为什么?编写C代码进行实验。\n",
"1. 磁盘的缓存大小能否测量?典型的硬盘是多少?固态驱动器需要缓存吗?\n",
"1. 测量通过以太网发送消息时的数据包开销。查找UDP和TCP/IP连接之间的差异。\n",
"1. 直接内存访问允许CPU以外的设备直接向内存写入(和读取)。为什么要这样?\n",
"1. 看看Turing T4 GPU的性能数字。为什么从FP16到INT8和INT4的性能只翻倍?\n",
"1. 一个网络包从旧金山到阿姆斯特丹的往返旅行需要多长时间?提示:可以假设距离为10000公里。\n",
"\n",
"[Discussions](https://discuss.d2l.ai/t/5717)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,507 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b73c8f7f",
"metadata": {
"origin_pos": 0
},
"source": [
"# 编译器和解释器\n",
":label:`sec_hybridize`\n",
"\n",
"目前为止,本书主要关注的是*命令式编程*(imperative programming)。\n",
"命令式编程使用诸如`print`、“`+`”和`if`之类的语句来更改程序的状态。\n",
"考虑下面这段简单的命令式程序:\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "2f96dffd",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:16.571866Z",
"iopub.status.busy": "2023-08-18T06:58:16.571326Z",
"iopub.status.idle": "2023-08-18T06:58:16.580794Z",
"shell.execute_reply": "2023-08-18T06:58:16.579992Z"
},
"origin_pos": 1,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"10\n"
]
}
],
"source": [
"def add(a, b):\n",
" return a + b\n",
"\n",
"def fancy_func(a, b, c, d):\n",
" e = add(a, b)\n",
" f = add(c, d)\n",
" g = add(e, f)\n",
" return g\n",
"\n",
"print(fancy_func(1, 2, 3, 4))"
]
},
{
"cell_type": "markdown",
"id": "fa1758bc",
"metadata": {
"origin_pos": 2
},
"source": [
"Python是一种*解释型语言*(interpreted language)。因此,当对上面的`fancy_func`函数求值时,它按顺序执行函数体的操作。也就是说,它将通过对`e = add(a, b)`求值,并将结果存储为变量`e`,从而更改程序的状态。接下来的两个语句`f = add(c, d)`和`g = add(e, f)`也将执行类似的操作,即执行加法计算并将结果存储为变量。 :numref:`fig_compute_graph`说明了数据流。\n",
"\n",
"![命令式编程中的数据流](../img/computegraph.svg)\n",
":label:`fig_compute_graph`\n",
"\n",
"尽管命令式编程很方便,但可能效率不高。一方面,Python会单独执行这三个函数的调用,而没有考虑`add`函数在`fancy_func`中被重复调用。如果在一个GPU(甚至多个GPU)上执行这些命令,那么Python解释器产生的开销可能会非常大。此外,它需要保存`e`和`f`的变量值,直到`fancy_func`中的所有语句都执行完毕。这是因为程序不知道在执行语句`e = add(a, b)`和`f = add(c, d)`之后,其他部分是否会使用变量`e`和`f`。\n",
"\n",
"## 符号式编程\n",
"\n",
"考虑另一种选择*符号式编程*(symbolic programming),即代码通常只在完全定义了过程之后才执行计算。这个策略被多个深度学习框架使用,包括Theano和TensorFlow(后者已经获得了命令式编程的扩展)。一般包括以下步骤:\n",
"\n",
"1. 定义计算流程;\n",
"1. 将流程编译成可执行的程序;\n",
"1. 给定输入,调用编译好的程序执行。\n",
"\n",
"这将允许进行大量的优化。首先,在大多数情况下,我们可以跳过Python解释器。从而消除因为多个更快的GPU与单个CPU上的单个Python线程搭配使用时产生的性能瓶颈。其次,编译器可以将上述代码优化和重写为`print((1 + 2) + (3 + 4))`甚至`print(10)`。因为编译器在将其转换为机器指令之前可以看到完整的代码,所以这种优化是可以实现的。例如,只要某个变量不再需要,编译器就可以释放内存(或者从不分配内存),或者将代码转换为一个完全等价的片段。下面,我们将通过模拟命令式编程来进一步了解符号式编程的概念。\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "ccb650c9",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:16.584271Z",
"iopub.status.busy": "2023-08-18T06:58:16.583746Z",
"iopub.status.idle": "2023-08-18T06:58:16.589230Z",
"shell.execute_reply": "2023-08-18T06:58:16.588464Z"
},
"origin_pos": 3,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"def add(a, b):\n",
" return a + b\n",
"\n",
"def fancy_func(a, b, c, d):\n",
" e = add(a, b)\n",
" f = add(c, d)\n",
" g = add(e, f)\n",
" return g\n",
"print(fancy_func(1, 2, 3, 4))\n",
"10\n"
]
}
],
"source": [
"def add_():\n",
" return '''\n",
"def add(a, b):\n",
" return a + b\n",
"'''\n",
"\n",
"def fancy_func_():\n",
" return '''\n",
"def fancy_func(a, b, c, d):\n",
" e = add(a, b)\n",
" f = add(c, d)\n",
" g = add(e, f)\n",
" return g\n",
"'''\n",
"\n",
"def evoke_():\n",
" return add_() + fancy_func_() + 'print(fancy_func(1, 2, 3, 4))'\n",
"\n",
"prog = evoke_()\n",
"print(prog)\n",
"y = compile(prog, '', 'exec')\n",
"exec(y)"
]
},
{
"cell_type": "markdown",
"id": "2054d959",
"metadata": {
"origin_pos": 4
},
"source": [
"命令式(解释型)编程和符号式编程的区别如下:\n",
"\n",
"* 命令式编程更容易使用。在Python中,命令式编程的大部分代码都是简单易懂的。命令式编程也更容易调试,这是因为无论是获取和打印所有的中间变量值,或者使用Python的内置调试工具都更加简单;\n",
"* 符号式编程运行效率更高,更易于移植。符号式编程更容易在编译期间优化代码,同时还能够将程序移植到与Python无关的格式中,从而允许程序在非Python环境中运行,避免了任何潜在的与Python解释器相关的性能问题。\n",
"\n",
"## 混合式编程\n",
"\n",
"历史上,大部分深度学习框架都在命令式编程与符号式编程之间进行选择。例如,Theano、TensorFlow(灵感来自前者)、Keras和CNTK采用了符号式编程。相反地,Chainer和PyTorch采取了命令式编程。在后来的版本更新中,TensorFlow 2.0和Keras增加了命令式编程。\n",
]
},
{
"cell_type": "markdown",
"id": "2bc27dfe",
"metadata": {
"origin_pos": 6,
"tab": [
"pytorch"
]
},
"source": [
"如上所述,PyTorch是基于命令式编程并且使用动态计算图。为了能够利用符号式编程的可移植性和效率,开发人员思考能否将这两种编程模型的优点结合起来,于是就产生了torchscript。torchscript允许用户使用纯命令式编程进行开发和调试,同时能够将大多数程序转换为符号式程序,以便在需要产品级计算性能和部署时使用。\n"
]
},
{
"cell_type": "markdown",
"id": "b88d0031",
"metadata": {
"origin_pos": 9
},
"source": [
"## `Sequential`的混合式编程\n",
"\n",
"要了解混合式编程的工作原理,最简单的方法是考虑具有多层的深层网络。按照惯例,Python解释器需要执行所有层的代码来生成一条指令,然后将该指令转发到CPU或GPU。对于单个的(快速的)计算设备,这不会导致任何重大问题。另一方面,如果我们使用先进的8-GPU服务器,比如AWS P3dn.24xlarge实例,Python将很难让所有的GPU都保持忙碌。在这里,瓶颈是单线程的Python解释器。让我们看看如何通过用`torch.jit.script`将模型脚本化来解决代码中这个瓶颈。首先,我们定义一个简单的多层感知机。\n",
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "65533e8b",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:16.592892Z",
"iopub.status.busy": "2023-08-18T06:58:16.592388Z",
"iopub.status.idle": "2023-08-18T06:58:18.663997Z",
"shell.execute_reply": "2023-08-18T06:58:18.662987Z"
},
"origin_pos": 11,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[ 0.0722, -0.0190]], grad_fn=<AddmmBackward0>)"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import torch\n",
"from torch import nn\n",
"from d2l import torch as d2l\n",
"\n",
"\n",
"# 生产网络的工厂模式\n",
"def get_net():\n",
" net = nn.Sequential(nn.Linear(512, 256),\n",
" nn.ReLU(),\n",
" nn.Linear(256, 128),\n",
" nn.ReLU(),\n",
" nn.Linear(128, 2))\n",
" return net\n",
"\n",
"x = torch.randn(size=(1, 512))\n",
"net = get_net()\n",
"net(x)"
]
},
{
"cell_type": "markdown",
"id": "c4c394a8",
"metadata": {
"origin_pos": 15,
"tab": [
"pytorch"
]
},
"source": [
"通过使用`torch.jit.script`函数来转换模型,我们就有能力编译和优化多层感知机中的计算,而模型的计算结果保持不变。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "ac75ec68",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:18.669810Z",
"iopub.status.busy": "2023-08-18T06:58:18.668614Z",
"iopub.status.idle": "2023-08-18T06:58:18.805275Z",
"shell.execute_reply": "2023-08-18T06:58:18.804217Z"
},
"origin_pos": 19,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[ 0.0722, -0.0190]], grad_fn=<AddmmBackward0>)"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"net = torch.jit.script(net)\n",
"net(x)"
]
},
{
"cell_type": "markdown",
"id": "a6620d01",
"metadata": {
"origin_pos": 23,
"tab": [
"pytorch"
]
},
"source": [
"我们编写与之前相同的代码,再使用`torch.jit.script`简单地转换模型,当完成这些任务后,网络就将得到优化(我们将在下面对性能进行基准测试)。\n"
]
},
{
"cell_type": "markdown",
"id": "49dd9081",
"metadata": {
"origin_pos": 26
},
"source": [
"### 通过混合式编程加速\n",
"\n",
"为了证明通过编译获得了性能改进,我们比较了混合编程前后执行`net(x)`所需的时间。让我们先定义一个度量时间的类,它在本章中在衡量(和改进)模型性能时将非常有用。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "843b1333",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:18.809971Z",
"iopub.status.busy": "2023-08-18T06:58:18.809674Z",
"iopub.status.idle": "2023-08-18T06:58:18.815218Z",
"shell.execute_reply": "2023-08-18T06:58:18.814277Z"
},
"origin_pos": 27,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"class Benchmark:\n",
" \"\"\"用于测量运行时间\"\"\"\n",
" def __init__(self, description='Done'):\n",
" self.description = description\n",
"\n",
" def __enter__(self):\n",
" self.timer = d2l.Timer()\n",
" return self\n",
"\n",
" def __exit__(self, *args):\n",
" print(f'{self.description}: {self.timer.stop():.4f} sec')"
]
},
{
"cell_type": "markdown",
"id": "f007d153",
"metadata": {
"origin_pos": 29,
"tab": [
"pytorch"
]
},
"source": [
"现在我们可以调用网络两次,一次使用torchscript,一次不使用torchscript。\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "429dcf27",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:18.819415Z",
"iopub.status.busy": "2023-08-18T06:58:18.819129Z",
"iopub.status.idle": "2023-08-18T06:58:19.098924Z",
"shell.execute_reply": "2023-08-18T06:58:19.097877Z"
},
"origin_pos": 33,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"无torchscript: 0.1361 sec\n",
"有torchscript: 0.1204 sec\n"
]
}
],
"source": [
"net = get_net()\n",
"with Benchmark('无torchscript'):\n",
" for i in range(1000): net(x)\n",
"\n",
"net = torch.jit.script(net)\n",
"with Benchmark('有torchscript'):\n",
" for i in range(1000): net(x)"
]
},
{
"cell_type": "markdown",
"id": "67b1621c",
"metadata": {
"origin_pos": 37,
"tab": [
"pytorch"
]
},
"source": [
"如以上结果所示,在`nn.Sequential`的实例被函数`torch.jit.script`脚本化后,通过使用符号式编程提高了计算性能。\n"
]
},
{
"cell_type": "markdown",
"id": "55f995fe",
"metadata": {
"origin_pos": 40
},
"source": [
"### 序列化\n"
]
},
{
"cell_type": "markdown",
"id": "77ddf279",
"metadata": {
"origin_pos": 42,
"tab": [
"pytorch"
]
},
"source": [
"编译模型的好处之一是我们可以将模型及其参数序列化(保存)到磁盘。这允许这些训练好的模型部署到其他设备上,并且还能方便地使用其他前端编程语言。同时,通常编译模型的代码执行速度也比命令式编程更快。让我们看看`save`的实际功能。\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "5109f057",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:19.104418Z",
"iopub.status.busy": "2023-08-18T06:58:19.103582Z",
"iopub.status.idle": "2023-08-18T06:58:19.271595Z",
"shell.execute_reply": "2023-08-18T06:58:19.270264Z"
},
"origin_pos": 46,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"-rw-r--r-- 1 ci ci 651K Aug 18 06:58 my_mlp\r\n"
]
}
],
"source": [
"net.save('my_mlp')\n",
"!ls -lh my_mlp*"
]
},
{
"cell_type": "markdown",
"id": "7e1bcc1c",
"metadata": {
"origin_pos": 60
},
"source": [
"## 小结\n",
"\n",
"* 命令式编程使得新模型的设计变得容易,因为可以依据控制流编写代码,并拥有相对成熟的Python软件生态。\n",
"* 符号式编程要求我们先定义并且编译程序,然后再执行程序,其好处是提高了计算性能。\n"
]
},
{
"cell_type": "markdown",
"id": "b573ae3c",
"metadata": {
"origin_pos": 62
},
"source": [
"## 练习\n"
]
},
{
"cell_type": "markdown",
"id": "e6c4afe2",
"metadata": {
"origin_pos": 64,
"tab": [
"pytorch"
]
},
"source": [
"1. 回顾前几章中感兴趣的模型,能提高它们的计算性能吗?\n"
]
},
{
"cell_type": "markdown",
"id": "5db5892b",
"metadata": {
"origin_pos": 66,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/2788)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,40 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "88dd3320",
"metadata": {
"origin_pos": 0
},
"source": [
"# 计算性能\n",
":label:`chap_performance`\n",
"\n",
"在深度学习中,数据集和模型通常都很大,导致计算量也会很大。\n",
"因此,计算的性能非常重要。\n",
"本章将集中讨论影响计算性能的主要因素:命令式编程、符号式编程、\n",
"异步计算、自动并行和多GPU计算。\n",
"通过学习本章,对于前几章中实现的那些模型,可以进一步提高它们的计算性能。\n",
"例如,我们可以在不影响准确性的前提下,大大减少训练时间。\n",
"\n",
":begin_tab:toc\n",
" - [hybridize](hybridize.ipynb)\n",
" - [async-computation](async-computation.ipynb)\n",
" - [auto-parallelism](auto-parallelism.ipynb)\n",
" - [hardware](hardware.ipynb)\n",
" - [multiple-gpus](multiple-gpus.ipynb)\n",
" - [multiple-gpus-concise](multiple-gpus-concise.ipynb)\n",
" - [parameterserver](parameterserver.ipynb)\n",
":end_tab:\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
File diff suppressed because it is too large Load Diff
File diff suppressed because it is too large Load Diff
@@ -0,0 +1,124 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "8082b37d",
"metadata": {
"origin_pos": 0
},
"source": [
"# 参数服务器\n",
":label:`sec_parameterserver`\n",
"\n",
"当我们从一个GPU迁移到多个GPU时,以及再迁移到包含多个GPU的多个服务器时(可能所有服务器的分布跨越了多个机架和多个网络交换机),分布式并行训练算法也需要变得更加复杂。具体来说,一方面,不同互连方式的带宽存在极大的区别(例如,NVLink可以通过设置实现跨$6$条链路的高达100GB/s的带宽,16通道的PCIe4.0提供32GB/s的带宽,而即使是高速100GbE以太网也只能提供大约10GB/s的带宽);另一方面,期望开发者既能完成统计学习建模又精通系统和网络也是不切实际的。\n",
"\n",
"参数服务器的核心思想首先是由 :cite:`Smola.Narayanamurthy.2010`在分布式隐变量模型的背景下引入的。然后,在 :cite:`Ahmed.Aly.Gonzalez.ea.2012`中描述了Push和Pull的语义,又在 :cite:`Li.Andersen.Park.ea.2014`中描述了系统和开源库。下面,我们将介绍用于提高计算效率的组件。\n",
"\n",
"## 数据并行训练\n",
"\n",
"让我们回顾一下在分布式架构中数据并行的训练方法,因为在实践中它的实现相对简单,因此本节将排除其他内容只对其进行介绍。由于当今的GPU拥有大量的显存,因此在实际场景中(不包括图深度学习)只有数据并行这种并行训练策略值得推荐。图 :numref:`fig_parameterserver`描述了在 :numref:`sec_multi_gpu`中实现的数据并行的变体。其中的关键是梯度的聚合需要在单个GPU(GPU 0)上完成,然后再将更新后的参数广播给所有GPU。\n",
"\n",
"![左图是单GPU训练;右图是多GPU训练的一个变体:(1)计算损失和梯度,(2)所有梯度聚合在一个GPU上,(3)发生参数更新,并将参数重新广播给所有GPU](../img/ps.svg)\n",
":label:`fig_parameterserver`\n",
"\n",
"回顾来看,选择GPU 0进行聚合似乎是个很随便的决定,当然也可以选择在CPU上聚合,事实上只要优化算法支持,在实际操作中甚至可以在某个GPU上聚合其中一些参数,而在另一个GPU上聚合另一些参数。例如,如果有四个与参数向量相关的梯度$\\mathbf{g}_1, \\ldots, \\mathbf{g}_4$,还可以一个GPU对一个$\\mathbf{g}_i$($i = 1, \\ldots, 4$)地进行梯度聚合。\n",
"\n",
"这样的推断似乎是轻率和武断的,毕竟数学应该是逻辑自洽的。但是,我们处理的是如 :numref:`sec_hardware`中所述的真实的物理硬件,其中不同的总线具有不同的带宽。考虑一个如 :numref:`sec_hardware`中所述的真实的$4$路GPU服务器。如果它的网络连接相当不错,那么可能拥有一个100GbE的网卡。更具代表性的数字在1~10GbE范围内,其有效带宽为100MB/s到1GB/s。因为CPU的PCIe通道太少(例如,消费级的Intel CPU有$24$个通道),所以无法直接与所有的GPU相连接,因此需要[multiplexer](https://www.broadcom.com/products/pcie-switches-bridges/pcie-switches)。CPU在16x Gen3链路上的带宽为16GB/s,这也是每个GPU连接到交换机的速度,这意味着GPU设备之间的通信更有效。\n",
"\n",
"![一个4路GPU服务器](../img/bw-hierarchy.svg)\n",
":label:`fig_bw_hierarchy`\n",
"\n",
"为了便于讨论,我们假设所有梯度共需160MB。在这种情况下,将其中$3$个GPU的梯度发送到第$4$个GPU上需要$30$毫秒(每次传输需要$10$毫秒,即160MB除以16GB/s)。再加上$30$毫秒将权重向量传输回来,得到的结果是总共需要$60$毫秒。如果将所有的数据发送到CPU,总共需要$80$毫秒,其中将有$40$毫秒的惩罚,因为$4$个GPU每个都需要将数据发送到CPU。最后,假设能够将梯度分为$4$个部分,每个部分为$40$MB,现在可以在不同的GPU上同时聚合每个部分。因为PCIe交换机在所有链路之间提供全带宽操作,所以传输需要$2.5\\times 3=7.5$毫秒,而不是$30$毫秒,因此同步操作总共需要$15$毫秒。简而言之,一样的参数同步操作基于不同的策略时间可能在$15$毫秒到$80$毫秒之间。 :numref:`fig_ps_distributed`描述了交换参数的不同策略。\n",
"\n",
"![参数同步策略](../img/ps-distributed.svg)\n",
":label:`fig_ps_distributed`\n",
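上面三种策略的耗时估算可以写成几行代码复查。注意MB除以GB/s恰好得到毫秒(例如160MB / 16GB/s = 10毫秒),以下只是对正文数字的机械复现:

```python
def t_ms(size_mb, bw_gb_per_s):
    # MB / (GB/s) = 毫秒
    return size_mb / bw_gb_per_s

# 策略1:先把3个GPU的梯度串行发到GPU 0,再广播回去
gpu_total = 2 * 3 * t_ms(160, 16)     # 30 + 30 = 60毫秒
# 策略2:4个GPU都把数据经PCIe发到CPU再返回
cpu_total = 2 * 4 * t_ms(160, 16)     # 40 + 40 = 80毫秒
# 策略3:梯度切成4块(各40MB),在4个GPU上并行聚合
shard_total = 2 * 3 * t_ms(40, 16)    # 7.5 + 7.5 = 15毫秒
print(gpu_total, cpu_total, shard_total)
```

这种"先在纸上估算"的做法正是 :numref:`sec_hardware`小结中建议的工作方式:数量级的差异在写代码之前就能看出来。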
"\n",
"请注意,我们还可以使用另一个工具来改善性能:在深度网络中,从顶部到底部计算所有梯度需要一些时间,因此即使还在忙着为某些参数计算梯度时,就可以开始为准备好的参数同步梯度了。想了解详细信息可以参见 :cite:`Sergeev.Del-Balso.2018`,想知道如何操作可参考[Horovod](https://github.com/horovod/horovod)。\n",
"\n",
"## 环同步(Ring Synchronization)\n",
"\n",
"当谈及现代深度学习硬件的同步问题时,我们经常会遇到大量的定制的网络连接。例如,AWS p3.16xlarge和NVIDIA DGX-2实例中的连接都使用了 :numref:`fig_nvlink`中的结构。每个GPU通过PCIe链路连接到主机CPU,该链路最多只能以16GB/s的速度运行。此外,每个GPU还具有$6$个NVLink连接,每个NVLink连接都能够以300Gbit/s进行双向传输。这相当于每个链路每个方向约$300\\div 8\\div 2\\approx 18 \\mathrm{GB/s}$。简言之,聚合的NVLink带宽明显高于PCIe带宽,问题是如何有效地使用它。\n",
"\n",
"![在8台V100 GPU服务器上连接NVLink(图片由英伟达提供)](../img/nvlink.svg)\n",
":label:`fig_nvlink`\n",
"\n",
" :cite:`Wang.Li.Liberty.ea.2018`的研究结果表明最优的同步策略是将网络分解成两个环,并基于两个环直接同步数据。\n",
" :numref:`fig_nvlink_twoloop`描述了网络可以分解为一个具有双NVLink带宽的环(1-2-3-4-5-6-7-8-1)和一个具有常规带宽的环(1-4-6-3-5-8-2-7-1)。在这种情况下,设计一个高效的同步协议是非常重要的。\n",
"\n",
"![将NVLink网络分解为两个环。](../img/nvlink-twoloop.svg)\n",
":label:`fig_nvlink_twoloop`\n",
"\n",
"考虑下面的思想实验:给定由$n$个计算节点(或GPU)组成的一个环,梯度可以从第一个节点发送到第二个节点,在第二个节点将本地的梯度与传送的梯度相加并发送到第三个节点,依此类推。在$n-1$步之后,可以在最后访问的节点中找到聚合梯度。也就是说,聚合梯度的时间随节点数线性增长。但如果照此操作,算法是相当低效的。归根结底,在任何时候都只有一个节点在通信。如果我们将梯度分为$n$个块,并从节点$i$开始同步块$i$,会怎么样?因为每个块的大小是$1/n$,所以总时间现在是$(n-1)/n \\approx 1$。换句话说,当我们增大环的大小时,聚合梯度所花费的时间不会增加。这是一个相当惊人的结果。 :numref:`fig_ringsync`说明了$n=4$个节点上的步骤顺序。\n",
"\n",
"![跨4个节点的环同步。每个节点开始向其左邻居发送部分梯度,直到在其右邻居中找到聚合的梯度](../img/ringsync.svg)\n",
":label:`fig_ringsync`\n",
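上述分块环同步可以用几行NumPy代码模拟验证(这只是对算法逻辑的示意,忽略真实系统中的并行执行与通信细节):

```python
import numpy as np

def ring_reduce_scatter(grads):
    """模拟环同步的reduce-scatter阶段:
    第step步,节点i把块(i - step) % n发给右邻居并在那里累加;
    n-1步后,块b的完整聚合结果位于节点(b - 1) % n上"""
    n = len(grads)
    chunks = [np.array_split(g.astype(float), n) for g in grads]
    for step in range(n - 1):
        for i in range(n):
            blk = (i - step) % n
            chunks[(i + 1) % n][blk] = chunks[(i + 1) % n][blk] + chunks[i][blk]
    return chunks

n = 4
grads = [np.arange(8.0) + i for i in range(n)]   # 每个节点的本地梯度
chunks = ring_reduce_scatter(grads)
total = np.array_split(sum(grads), n)            # 期望的聚合结果
for b in range(n):
    assert np.allclose(chunks[(b - 1) % n][b], total[b])
```

由于每一步中所有$n$个块同时在环上流动,每个节点始终在收发数据,这正是"总时间不随节点数增加"的原因;聚合后还需一个对称的all-gather阶段把结果传回各节点,这里未模拟。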
"\n",
"如果我们使用相同的例子,跨$8$个V100 GPU同步160MB,我们得到的结果大约是$2 \\times 160 \\mathrm{MB} \\div (3 \\times18 \\mathrm{GB/s}) \\approx 6 \\mathrm{ms}$。这比使用PCIe总线要好,即使我们现在使用的是$8$个GPU。请注意,这些数字在实践中通常会差一些,因为深度学习框架无法将通信组合成大的突发传输。\n",
"\n",
"注意,有一种常见的误解认为环同步与其他同步算法在本质上是不同的,实际上与简单的树算法相比,其唯一的区别是同步路径稍微精细一些。\n",
"\n",
"## 多机训练\n",
"\n",
"新的挑战出现在多台机器上进行分布式训练:我们需要服务器之间相互通信,而这些服务器又只通过相对较低的带宽结构连接,在某些情况下这种连接的速度可能会慢一个数量级,因此跨设备同步是个棘手的问题。毕竟,在不同机器上运行训练代码的速度会有细微的差别,因此如果想使用分布式优化的同步算法就需要*同步*(synchronize)这些机器。\n",
" :numref:`fig_ps_multimachine`说明了分布式并行训练是如何发生的。\n",
"\n",
"1. 在每台机器上读取一组(不同的)批量数据,在多个GPU之间分割数据并传输到GPU的显存中。基于每个GPU上的批量数据分别计算预测和梯度。\n",
"2. 来自一台机器上的所有的本地GPU的梯度聚合在一个GPU上(或者在不同的GPU上聚合梯度的某些部分)。\n",
"3. 每台机器的梯度被发送到其本地CPU中。\n",
"4. 所有的CPU将梯度发送到中央参数服务器中,由该服务器聚合所有梯度。\n",
"5. 然后使用聚合后的梯度来更新参数,并将更新后的参数广播回各个CPU中。\n",
"6. 更新后的参数信息发送到本地一个(或多个)GPU中。\n",
"7. 所有GPU上的参数更新完成。\n",
"\n",
"![多机多GPU分布式并行训练](../img/ps-multimachine.svg)\n",
":label:`fig_ps_multimachine`\n",
"\n",
"以上这些操作似乎都相当简单,而且事实上它们可以在一台机器内高效地执行,但是当我们考虑多台机器时,就会发现中央的参数服务器成为了瓶颈。毕竟,每个服务器的带宽是有限的,因此对$m$个工作节点来说,将所有梯度发送到服务器所需的时间是$\\mathcal{O}(m)$。我们也可以通过将参数服务器数量增加到$n$来突破这一障碍。此时,每个服务器只需要存储$\\mathcal{O}(1/n)$个参数,因此更新和优化的总时间变为$\\mathcal{O}(m/n)$。这两个数字的匹配会产生稳定的伸缩性,而不用在乎我们需要处理多少工作节点。在实际应用中,我们使用同一台机器既作为工作节点还作为服务器。设计说明请参考 :numref:`fig_ps_multips`(技术细节请参考 :cite:`Li.Andersen.Park.ea.2014`)。特别是,确保多台机器只在没有不合理延迟的情况下工作是相当困难的。\n",
"\n",
"![上图:单参数服务器是一个瓶颈,因为它的带宽是有限的;下图:多参数服务器使用聚合带宽存储部分参数](../img/ps-multips.svg)\n",
":label:`fig_ps_multips`\n",
"\n",
"## 键值存储\n",
"\n",
"在实践中,实现分布式多GPU训练所需要的步骤绝非易事。这就是值得使用公共抽象的原因,即重新定义具有更新语义的*键-值存储*(key-value store)抽象。\n",
"\n",
"在许多工作节点和许多GPU中,梯度$i$的计算可以定义为\n",
"\n",
"$$\\mathbf{g}_{i} = \\sum_{k \\in \\text{workers}} \\sum_{j \\in \\text{GPUs}} \\mathbf{g}_{ijk},$$\n",
"\n",
"其中$\\mathbf{g}_{ijk}$是在工作节点$k$的GPU$j$上拆分的梯度$i$的一部分。这个运算的关键在于它是一个*交换归约*(commutative reduction),也就是说,它把许多向量变换成一个向量,而运算顺序在完成向量变换时并不重要。这对实现我们的目标来说是非常好的,因为不需要为何时接收哪个梯度进行细粒度的控制。此外,请注意,这个操作在不同的$i$之间是独立的。\n",
"\n",
"这就允许我们定义下面两个操作:*push*(用于累积梯度)和*pull*(用于取得聚合梯度)。因为我们有很多层,也就有很多不同的梯度集合,因此需要用一个键$i$来对梯度建索引。这个与Dynamo :cite:`DeCandia.Hastorun.Jampani.ea.2007`中引入的*键-值存储*之间存在相似性并非巧合。它们两个定义都拥有许多相似的性质,特别是在多个服务器之间分发参数时。\n",
"\n",
"*键-值存储*的push与pull操作描述如下:\n",
"\n",
"* **push(key, value)**:将特定的梯度值从工作节点发送到公共存储,在那里通过某种方式(例如,相加)来聚合值;\n",
"* **pull(key, value)**:从公共存储中取得某种方式(例如,组合来自所有工作节点的梯度)的聚合值。\n",
"\n",
"通过将同步的所有复杂性隐藏在一个简单的push和pull操作背后,我们可以将统计建模人员(他们希望能够用简单的术语表达优化)和系统工程师(他们需要处理分布式同步中固有的复杂性)的关注点解耦。\n",
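下面用几行Python勾勒push与pull的语义(一个单机内存版的示意,仅为说明接口;真实的参数服务器还要处理网络传输、键的分片与容错):

```python
from collections import defaultdict

class KVStore:
    """以"相加"为聚合方式的最小键值存储示意"""
    def __init__(self):
        self._store = defaultdict(float)

    def push(self, key, value):
        # 聚合是交换归约,因此无需关心各工作节点push的先后顺序
        self._store[key] += value

    def pull(self, key):
        # 取得当前的聚合值
        return self._store[key]

kv = KVStore()
for grad in [0.1, 0.2, 0.3]:   # 三个工作节点分别推送梯度0的本地分量
    kv.push('grad_0', grad)
print(kv.pull('grad_0'))       # 约为0.6
```

统计建模人员只需面对push和pull这两个调用;至于聚合发生在哪台服务器、经过哪条链路,完全由存储的实现决定。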
"\n",
"## 小结\n",
"\n",
"* 同步需要高度适应特定的网络基础设施和服务器内的连接,这种适应会严重影响同步所需的时间。\n",
"* 环同步对于p3和DGX-2服务器是最佳的,而对于其他服务器则未必。\n",
"* 当添加多个参数服务器以增加带宽时,分层同步策略可以工作得很好。\n",
"\n",
"## 练习\n",
"\n",
"1. 能否进一步提高环同步的性能?(提示:可以双向发送消息。)\n",
"1. 在计算仍在进行时,可否允许执行异步通信?它将如何影响性能?\n",
"1. 怎样处理在长时间运行的计算过程中丢失了一台服务器的问题?尝试设计一种容错机制来避免重启计算。\n",
"\n",
"[Discussions](https://discuss.d2l.ai/t/5774)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@@ -0,0 +1,51 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2858f745",
"metadata": {
"origin_pos": 0
},
"source": [
"# 计算机视觉\n",
":label:`chap_cv`\n",
"\n",
"近年来,深度学习一直是提高计算机视觉系统性能的变革力量。\n",
"无论是医疗诊断、自动驾驶,还是智能滤波器、摄像头监控,许多计算机视觉领域的应用都与我们当前和未来的生活密切相关。\n",
"可以说,最先进的计算机视觉应用与深度学习几乎是不可分割的。\n",
"有鉴于此,本章将重点介绍计算机视觉领域,并探讨最近在学术界和行业中具有影响力的方法和应用。\n",
"\n",
"在 :numref:`chap_cnn`和 :numref:`chap_modern_cnn`中,我们研究了计算机视觉中常用的各种卷积神经网络,并将它们应用到简单的图像分类任务中。\n",
"本章开头,我们将介绍两种可以改进模型泛化的方法,即*图像增广*和*微调*,并将它们应用于图像分类。\n",
"由于深度神经网络可以有效地表示多个层次的图像,因此这种分层表示已成功用于各种计算机视觉任务,例如*目标检测*(object detection)、*语义分割*(semantic segmentation)和*样式迁移*(style transfer)。\n",
"秉承计算机视觉中利用分层表示的关键思想,我们将从物体检测的主要组件和技术开始,继而展示如何使用*完全卷积网络*对图像进行语义分割,然后我们将解释如何使用样式迁移技术来生成像本书封面一样的图像。\n",
"最后在结束本章时,我们将本章和前几章的知识应用于两个流行的计算机视觉基准数据集。\n",
"\n",
":begin_tab:toc\n",
" - [image-augmentation](image-augmentation.ipynb)\n",
" - [fine-tuning](fine-tuning.ipynb)\n",
" - [bounding-box](bounding-box.ipynb)\n",
" - [anchor](anchor.ipynb)\n",
" - [multiscale-object-detection](multiscale-object-detection.ipynb)\n",
" - [object-detection-dataset](object-detection-dataset.ipynb)\n",
" - [ssd](ssd.ipynb)\n",
" - [rcnn](rcnn.ipynb)\n",
" - [semantic-segmentation-and-dataset](semantic-segmentation-and-dataset.ipynb)\n",
" - [transposed-conv](transposed-conv.ipynb)\n",
" - [fcn](fcn.ipynb)\n",
" - [neural-style](neural-style.ipynb)\n",
" - [kaggle-cifar10](kaggle-cifar10.ipynb)\n",
" - [kaggle-dog](kaggle-dog.ipynb)\n",
":end_tab:\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
File diff suppressed because it is too large Load Diff
File diff suppressed because it is too large Load Diff
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@@ -0,0 +1,267 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "722f8846",
"metadata": {
"origin_pos": 0
},
"source": [
"# 区域卷积神经网络(R-CNN)系列\n",
":label:`sec_rcnn`\n",
"\n",
"除了 :numref:`sec_ssd`中描述的单发多框检测之外,\n",
"区域卷积神经网络(region-based CNN或regions with CNN features,R-CNN) :cite:`Girshick.Donahue.Darrell.ea.2014`也是将深度模型应用于目标检测的开创性工作之一。\n",
"本节将介绍R-CNN及其一系列改进方法:快速的R-CNN(Fast R-CNN) :cite:`Girshick.2015`、更快的R-CNN(Faster R-CNN) :cite:`Ren.He.Girshick.ea.2015`和掩码R-CNN(Mask R-CNN) :cite:`He.Gkioxari.Dollar.ea.2017`。\n",
"限于篇幅,我们只着重介绍这些模型的设计思路。\n",
"\n",
"## R-CNN\n",
"\n",
"*R-CNN* :cite:`Girshick.Donahue.Darrell.ea.2014`首先从输入图像中选取若干(例如2000个)*提议区域*(如锚框也是一种选取方法),并标注它们的类别和边界框(如偏移量)。然后,用卷积神经网络对每个提议区域进行前向传播以抽取其特征。\n",
"接下来,我们用每个提议区域的特征来预测类别和边界框。\n",
"\n",
"![R-CNN模型](../img/r-cnn.svg)\n",
":label:`fig_r-cnn`\n",
"\n",
" :numref:`fig_r-cnn`展示了R-CNN模型。具体来说,R-CNN包括以下四个步骤:\n",
"\n",
"1. 对输入图像使用*选择性搜索*来选取多个高质量的提议区域 :cite:`Uijlings.Van-De-Sande.Gevers.ea.2013`。这些提议区域通常是在多个尺度下选取的,并具有不同的形状和大小。每个提议区域都将被标注类别和真实边界框;\n",
"1. 选择一个预训练的卷积神经网络,并将其在输出层之前截断。将每个提议区域变形为网络需要的输入尺寸,并通过前向传播输出抽取的提议区域特征;\n",
"1. 将每个提议区域的特征连同其标注的类别作为一个样本。训练多个支持向量机对目标分类,其中每个支持向量机用来判断样本是否属于某一个类别;\n",
"1. 将每个提议区域的特征连同其标注的边界框作为一个样本,训练线性回归模型来预测真实边界框。\n",
"\n",
"尽管R-CNN模型通过预训练的卷积神经网络有效地抽取了图像特征,但它的速度很慢。\n",
"想象一下,我们可能从一张图像中选出上千个提议区域,这需要上千次的卷积神经网络的前向传播来执行目标检测。\n",
"这种庞大的计算量使得R-CNN在现实世界中难以被广泛应用。\n",
"\n",
"## Fast R-CNN\n",
"\n",
"R-CNN的主要性能瓶颈在于,对每个提议区域,卷积神经网络的前向传播是独立的,而没有共享计算。\n",
"由于这些区域通常有重叠,独立的特征抽取会导致重复的计算。\n",
"*Fast R-CNN* :cite:`Girshick.2015`对R-CNN的主要改进之一,是仅在整张图象上执行卷积神经网络的前向传播。\n",
"\n",
"![Fast R-CNN模型](../img/fast-rcnn.svg)\n",
":label:`fig_fast_r-cnn`\n",
"\n",
" :numref:`fig_fast_r-cnn`中描述了Fast R-CNN模型。它的主要计算如下:\n",
"\n",
"1. 与R-CNN相比,Fast R-CNN用来提取特征的卷积神经网络的输入是整个图像,而不是各个提议区域。此外,这个网络通常会参与训练。设输入为一张图像,将卷积神经网络的输出的形状记为$1 \\times c \\times h_1 \\times w_1$\n",
"1. 假设选择性搜索生成了$n$个提议区域。这些形状各异的提议区域在卷积神经网络的输出上分别标出了形状各异的兴趣区域。然后,这些感兴趣的区域需要进一步抽取出形状相同的特征(比如指定高度$h_2$和宽度$w_2$),以便于连结后输出。为了实现这一目标,Fast R-CNN引入了*兴趣区域汇聚层*(RoI pooling):将卷积神经网络的输出和提议区域作为输入,输出连结后的各个提议区域抽取的特征,形状为$n \\times c \\times h_2 \\times w_2$\n",
"1. 通过全连接层将输出形状变换为$n \\times d$,其中超参数$d$取决于模型设计;\n",
"1. 预测$n$个提议区域中每个区域的类别和边界框。更具体地说,在预测类别和边界框时,将全连接层的输出分别转换为形状为$n \\times q$($q$是类别的数量)的输出和形状为$n \\times 4$的输出。其中预测类别时使用softmax回归。\n",
"\n",
"在Fast R-CNN中提出的兴趣区域汇聚层与 :numref:`sec_pooling`中介绍的汇聚层有所不同。在汇聚层中,我们通过设置汇聚窗口、填充和步幅的大小来间接控制输出形状。而兴趣区域汇聚层对每个区域的输出形状是可以直接指定的。\n",
"\n",
"例如,指定每个区域输出的高和宽分别为$h_2$和$w_2$。\n",
"对于任何形状为$h \\times w$的兴趣区域窗口,该窗口将被划分为$h_2 \\times w_2$子窗口网格,其中每个子窗口的大小约为$(h/h_2) \\times (w/w_2)$。\n",
"在实践中,任何子窗口的高度和宽度都应向上取整,其中的最大元素作为该子窗口的输出。\n",
"因此,兴趣区域汇聚层可从形状各异的兴趣区域中均抽取出形状相同的特征。\n",
"\n",
"作为说明性示例, :numref:`fig_roi`中提到,在$4 \\times 4$的输入中,我们选取了左上角$3\\times 3$的兴趣区域。\n",
"对于该兴趣区域,我们通过$2\\times 2$的兴趣区域汇聚层得到一个$2\\times 2$的输出。\n",
"请注意,四个划分后的子窗口中分别含有元素0、1、4、5(5最大);2、6(6最大);8、9(9最大);以及10。\n",
"\n",
"![一个 $2\\times 2$ 的兴趣区域汇聚层](../img/roi.svg)\n",
":label:`fig_roi`\n",
"\n",
"下面,我们演示了兴趣区域汇聚层的计算方法。\n",
"假设卷积神经网络抽取的特征`X`的高度和宽度都是4,且只有单通道。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "52b05409",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:03:10.049147Z",
"iopub.status.busy": "2023-08-18T07:03:10.048156Z",
"iopub.status.idle": "2023-08-18T07:03:11.581462Z",
"shell.execute_reply": "2023-08-18T07:03:11.580563Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[[[ 0., 1., 2., 3.],\n",
" [ 4., 5., 6., 7.],\n",
" [ 8., 9., 10., 11.],\n",
" [12., 13., 14., 15.]]]])"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import torch\n",
"import torchvision\n",
"\n",
"X = torch.arange(16.).reshape(1, 1, 4, 4)\n",
"X"
]
},
{
"cell_type": "markdown",
"id": "c5c9da14",
"metadata": {
"origin_pos": 4
},
"source": [
"让我们进一步假设输入图像的高度和宽度都是40像素,且选择性搜索在此图像上生成了两个提议区域。\n",
"每个区域由5个元素表示:区域目标类别、左上角和右下角的$(x, y)$坐标。\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "d5f4463d",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:03:11.585300Z",
"iopub.status.busy": "2023-08-18T07:03:11.584758Z",
"iopub.status.idle": "2023-08-18T07:03:11.589192Z",
"shell.execute_reply": "2023-08-18T07:03:11.588365Z"
},
"origin_pos": 6,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"rois = torch.Tensor([[0, 0, 0, 20, 20], [0, 0, 10, 30, 30]])"
]
},
{
"cell_type": "markdown",
"id": "dad0e007",
"metadata": {
"origin_pos": 8
},
"source": [
"由于`X`的高和宽是输入图像高和宽的$1/10$,因此,两个提议区域的坐标先按`spatial_scale`乘以0.1。\n",
"然后,在`X`上分别标出这两个兴趣区域`X[:, :, 0:3, 0:3]`和`X[:, :, 1:4, 0:4]`。\n",
"最后,在$2\\times 2$的兴趣区域汇聚层中,每个兴趣区域被划分为子窗口网格,并进一步抽取相同形状$2\\times 2$的特征。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "9c4ab6ca",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:03:11.592473Z",
"iopub.status.busy": "2023-08-18T07:03:11.592023Z",
"iopub.status.idle": "2023-08-18T07:03:11.598392Z",
"shell.execute_reply": "2023-08-18T07:03:11.597591Z"
},
"origin_pos": 10,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[[[ 5., 6.],\n",
" [ 9., 10.]]],\n",
"\n",
"\n",
" [[[ 9., 11.],\n",
" [13., 15.]]]])"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"torchvision.ops.roi_pool(X, rois, output_size=(2, 2), spatial_scale=0.1)"
]
},
{
"cell_type": "markdown",
"id": "6eb55aab",
"metadata": {
"origin_pos": 12
},
"source": [
"## Faster R-CNN\n",
"\n",
"为了较精确地检测目标结果,Fast R-CNN模型通常需要在选择性搜索中生成大量的提议区域。\n",
"*Faster R-CNN* :cite:`Ren.He.Girshick.ea.2015`提出将选择性搜索替换为*区域提议网络*(region proposal network),从而减少提议区域的生成数量,并保证目标检测的精度。\n",
"\n",
"![Faster R-CNN 模型](../img/faster-rcnn.svg)\n",
":label:`fig_faster_r-cnn`\n",
"\n",
" :numref:`fig_faster_r-cnn`描述了Faster R-CNN模型。\n",
"与Fast R-CNN相比,Faster R-CNN只将生成提议区域的方法从选择性搜索改为了区域提议网络,模型的其余部分保持不变。具体来说,区域提议网络的计算步骤如下:\n",
"\n",
"1. 使用填充为1的$3\\times 3$的卷积层变换卷积神经网络的输出,并将输出通道数记为$c$。这样,卷积神经网络为图像抽取的特征图中的每个单元均得到一个长度为$c$的新特征。\n",
"1. 以特征图的每个像素为中心,生成多个不同大小和宽高比的锚框并标注它们。\n",
"1. 使用锚框中心单元长度为$c$的特征,分别预测该锚框的二元类别(含目标还是背景)和边界框。\n",
"1. 使用非极大值抑制,从预测类别为目标的预测边界框中移除相似的结果。最终输出的预测边界框即是兴趣区域汇聚层所需的提议区域。\n",
"\n",
"值得一提的是,区域提议网络作为Faster R-CNN模型的一部分,是和整个模型一起训练得到的。\n",
"换句话说,Faster R-CNN的目标函数不仅包括目标检测中的类别和边界框预测,还包括区域提议网络中锚框的二元类别和边界框预测。\n",
"作为端到端训练的结果,区域提议网络能够学习到如何生成高质量的提议区域,从而在减少了从数据中学习的提议区域的数量的情况下,仍保持目标检测的精度。\n",
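区域提议网络第4步中的非极大值抑制可以用如下简化代码说明(一个NumPy示意,假设框的格式为$[x_1, y_1, x_2, y_2]$;实践中通常直接使用框架提供的实现,如`torchvision.ops.nms`):

```python
import numpy as np

def nms(boxes, scores, iou_threshold):
    """按置信度从高到低保留框,抑制与已保留框交并比过高的框"""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # 计算当前框与其余候选框的交并比(IoU)
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_threshold]  # 只保留重叠度低的候选框
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores, 0.5))  # [0, 2]:第二个框与第一个框高度重叠,被抑制
```

在区域提议网络中,这一步在"含目标"的预测框上执行,剩下的框即作为提议区域送入兴趣区域汇聚层。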
"\n",
"## Mask R-CNN\n",
"\n",
"如果在训练集中还标注了每个目标在图像上的像素级位置,那么*Mask R-CNN* :cite:`He.Gkioxari.Dollar.ea.2017`能够有效地利用这些详尽的标注信息进一步提升目标检测的精度。\n",
"\n",
"![Mask R-CNN 模型](../img/mask-rcnn.svg)\n",
":label:`fig_mask_r-cnn`\n",
"\n",
"如 :numref:`fig_mask_r-cnn`所示,Mask R-CNN是基于Faster R-CNN修改而来的。\n",
"具体来说,Mask R-CNN将兴趣区域汇聚层替换为了\n",
"*兴趣区域对齐*层,使用*双线性插值*(bilinear interpolation)来保留特征图上的空间信息,从而更适于像素级预测。\n",
"兴趣区域对齐层的输出包含了所有与兴趣区域的形状相同的特征图。\n",
"它们不仅被用于预测每个兴趣区域的类别和边界框,还通过额外的全卷积网络预测目标的像素级位置。\n",
"本章的后续章节将更详细地介绍如何使用全卷积网络预测图像中像素级的语义。\n",
"\n",
"## 小结\n",
"\n",
"* R-CNN对图像选取若干提议区域,使用卷积神经网络对每个提议区域执行前向传播以抽取其特征,然后再用这些特征来预测提议区域的类别和边界框。\n",
"* Fast R-CNN对R-CNN的一个主要改进:只对整个图像做卷积神经网络的前向传播。它还引入了兴趣区域汇聚层,从而为具有不同形状的兴趣区域抽取相同形状的特征。\n",
"* Faster R-CNN将Fast R-CNN中使用的选择性搜索替换为参与训练的区域提议网络,这样后者可以在减少提议区域数量的情况下仍保证目标检测的精度。\n",
"* Mask R-CNN在Faster R-CNN的基础上引入了一个全卷积网络,从而借助目标的像素级位置进一步提升目标检测的精度。\n",
"\n",
"## 练习\n",
"\n",
"1. 我们能否将目标检测视为回归问题(例如预测边界框和类别的概率)?可以参考YOLO模型 :cite:`Redmon.Divvala.Girshick.ea.2016`的设计。\n",
"1. 将单发多框检测与本节介绍的方法进行比较。它们的主要区别是什么?可以参考 :cite:`Zhao.Zheng.Xu.ea.2019`中的图2。\n"
]
},
{
"cell_type": "markdown",
"id": "7d4eaf53",
"metadata": {
"origin_pos": 14,
"tab": [
"pytorch"
]
},
"source": [
"[讨论区](https://discuss.d2l.ai/t/3207)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
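上文区域提议网络计算步骤中的第4步使用非极大值抑制(NMS)移除高度重叠的预测框。下面是一个最小的示意实现(假设边界框格式为 $(x_1, y_1, x_2, y_2)$,阈值取值仅为示例;这只是原理草图,并非 torchvision 的官方实现):

```python
import torch

def nms(boxes, scores, iou_threshold=0.5):
    """按置信度从高到低依次保留边界框,移除与已保留框IoU过大的框(示意实现)"""
    order = scores.argsort(descending=True)  # 按分数降序排列的框索引
    keep = []
    while order.numel() > 0:
        i = order[0].item()
        keep.append(i)
        if order.numel() == 1:
            break
        rest = boxes[order[1:]]
        # 计算当前框与其余框的交集区域
        xy1 = torch.max(boxes[i, :2], rest[:, :2])
        xy2 = torch.min(boxes[i, 2:], rest[:, 2:])
        inter = (xy2 - xy1).clamp(min=0).prod(dim=1)
        # 交并比 = 交集面积 / (两框面积之和 - 交集面积)
        area_i = (boxes[i, 2:] - boxes[i, :2]).prod()
        area_r = (rest[:, 2:] - rest[:, :2]).prod(dim=1)
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_threshold]  # 只保留重叠不大的框继续比较
    return torch.tensor(keep)

boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                      [1.0, 1.0, 11.0, 11.0],
                      [20.0, 20.0, 30.0, 30.0]])
scores = torch.tensor([0.9, 0.8, 0.7])
print(nms(boxes, scores, 0.5))  # 框0与框1高度重叠,分数较低的框1被抑制
```

实际使用中可以直接调用 `torchvision.ops.nms`,其语义与此一致。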
@@ -0,0 +1,6 @@
id,label
1,airplane
2,deer
3,horse
4,frog
5,cat
@@ -0,0 +1,584 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "8082691a",
"metadata": {
"origin_pos": 0
},
"source": [
"# 转置卷积\n",
":label:`sec_transposed_conv`\n",
"\n",
"到目前为止,我们所见到的卷积神经网络层,例如卷积层( :numref:`sec_conv_layer`)和汇聚层( :numref:`sec_pooling`),通常会减少(下采样)输入图像的空间维度(高和宽)。\n",
"然而如果输入和输出图像的空间维度相同,在以像素级分类的语义分割中将会很方便。\n",
"例如,输出像素所处的通道维可以保有输入像素在同一位置上的分类结果。\n",
"\n",
"为了实现这一点,尤其是在空间维度被卷积神经网络层缩小后,我们可以使用另一种类型的卷积神经网络层,它可以增加(上采样)中间层特征图的空间维度。\n",
"本节将介绍\n",
"*转置卷积*(transposed convolution) :cite:`Dumoulin.Visin.2016`,\n",
"用于逆转下采样导致的空间尺寸减小。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "1f39b5ef",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:22.451701Z",
"iopub.status.busy": "2023-08-18T07:05:22.451411Z",
"iopub.status.idle": "2023-08-18T07:05:24.490785Z",
"shell.execute_reply": "2023-08-18T07:05:24.489970Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import torch\n",
"from torch import nn\n",
"from d2l import torch as d2l"
]
},
{
"cell_type": "markdown",
"id": "f1007d54",
"metadata": {
"origin_pos": 4
},
"source": [
"## 基本操作\n",
"\n",
"让我们暂时忽略通道,从基本的转置卷积开始,设步幅为1且没有填充。\n",
"假设我们有一个$n_h \\times n_w$的输入张量和一个$k_h \\times k_w$的卷积核。\n",
"以步幅为1滑动卷积核窗口,每行$n_w$次,每列$n_h$次,共产生$n_h n_w$个中间结果。\n",
"每个中间结果都是一个$(n_h + k_h - 1) \\times (n_w + k_w - 1)$的张量,初始化为0。\n",
"为了计算每个中间张量,输入张量中的每个元素都要乘以卷积核,从而使所得的$k_h \\times k_w$张量替换中间张量的一部分。\n",
"请注意,每个中间张量被替换部分的位置与输入张量中元素的位置相对应。\n",
"最后,所有中间结果相加以获得最终结果。\n",
"\n",
"例如, :numref:`fig_trans_conv`解释了如何为$2\\times 2$的输入张量计算卷积核为$2\\times 2$的转置卷积。\n",
"\n",
"![卷积核为 $2\\times 2$ 的转置卷积。阴影部分是中间张量的一部分,也是用于计算的输入和卷积核张量元素。 ](../img/trans_conv.svg)\n",
":label:`fig_trans_conv`\n",
"\n",
"我们可以对输入矩阵`X`和卷积核矩阵`K`(**实现基本的转置卷积运算**)`trans_conv`。\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "e6931d90",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:24.494981Z",
"iopub.status.busy": "2023-08-18T07:05:24.494307Z",
"iopub.status.idle": "2023-08-18T07:05:24.499745Z",
"shell.execute_reply": "2023-08-18T07:05:24.498885Z"
},
"origin_pos": 5,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"def trans_conv(X, K):\n",
" h, w = K.shape\n",
" Y = torch.zeros((X.shape[0] + h - 1, X.shape[1] + w - 1))\n",
" for i in range(X.shape[0]):\n",
" for j in range(X.shape[1]):\n",
" Y[i: i + h, j: j + w] += X[i, j] * K\n",
" return Y"
]
},
{
"cell_type": "markdown",
"id": "6d64431b",
"metadata": {
"origin_pos": 6
},
"source": [
"与通过卷积核“减少”输入元素的常规卷积(在 :numref:`sec_conv_layer`中)相比,转置卷积通过卷积核“广播”输入元素,从而产生大于输入的输出。\n",
"我们可以通过 :numref:`fig_trans_conv`来构建输入张量`X`和卷积核张量`K`从而[**验证上述实现输出**]。\n",
"此实现是基本的二维转置卷积运算。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "a7c6e2fd",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:24.503202Z",
"iopub.status.busy": "2023-08-18T07:05:24.502646Z",
"iopub.status.idle": "2023-08-18T07:05:24.531448Z",
"shell.execute_reply": "2023-08-18T07:05:24.530730Z"
},
"origin_pos": 7,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[ 0., 0., 1.],\n",
" [ 0., 4., 6.],\n",
" [ 4., 12., 9.]])"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"X = torch.tensor([[0.0, 1.0], [2.0, 3.0]])\n",
"K = torch.tensor([[0.0, 1.0], [2.0, 3.0]])\n",
"trans_conv(X, K)"
]
},
{
"cell_type": "markdown",
"id": "c6698e0d",
"metadata": {
"origin_pos": 8
},
"source": [
"或者,当输入`X`和卷积核`K`都是四维张量时,我们可以[**使用高级API获得相同的结果**]。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "b9de6d80",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:24.535386Z",
"iopub.status.busy": "2023-08-18T07:05:24.534826Z",
"iopub.status.idle": "2023-08-18T07:05:24.544484Z",
"shell.execute_reply": "2023-08-18T07:05:24.543747Z"
},
"origin_pos": 10,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[[[ 0., 0., 1.],\n",
" [ 0., 4., 6.],\n",
" [ 4., 12., 9.]]]], grad_fn=<ConvolutionBackward0>)"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"X, K = X.reshape(1, 1, 2, 2), K.reshape(1, 1, 2, 2)\n",
"tconv = nn.ConvTranspose2d(1, 1, kernel_size=2, bias=False)\n",
"tconv.weight.data = K\n",
"tconv(X)"
]
},
{
"cell_type": "markdown",
"id": "80936d2e",
"metadata": {
"origin_pos": 12
},
"source": [
"## [**填充、步幅和多通道**]\n",
"\n",
"与常规卷积不同,在转置卷积中,填充被应用于输出(常规卷积将填充应用于输入)。\n",
"例如,当将高和宽两侧的填充数指定为1时,转置卷积的输出中将删除第一和最后的行与列。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "cd114de1",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:24.548040Z",
"iopub.status.busy": "2023-08-18T07:05:24.547398Z",
"iopub.status.idle": "2023-08-18T07:05:24.553659Z",
"shell.execute_reply": "2023-08-18T07:05:24.552864Z"
},
"origin_pos": 14,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[[[4.]]]], grad_fn=<ConvolutionBackward0>)"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tconv = nn.ConvTranspose2d(1, 1, kernel_size=2, padding=1, bias=False)\n",
"tconv.weight.data = K\n",
"tconv(X)"
]
},
{
"cell_type": "markdown",
"id": "22272c8b",
"metadata": {
"origin_pos": 16
},
"source": [
"在转置卷积中,步幅是针对中间结果(即输出)指定的,而不是针对输入。\n",
"使用 :numref:`fig_trans_conv`中相同的输入和卷积核张量,将步幅从1更改为2会增加中间张量的高和宽度,因此输出张量如 :numref:`fig_trans_conv_stride2`所示。\n",
"\n",
"![卷积核为$2\\times 2$,步幅为2的转置卷积。阴影部分是中间张量的一部分,也是用于计算的输入和卷积核张量元素。](../img/trans_conv_stride2.svg)\n",
":label:`fig_trans_conv_stride2`\n",
"\n",
"以下代码可以验证 :numref:`fig_trans_conv_stride2`中步幅为2的转置卷积的输出。\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "48064406",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:24.557362Z",
"iopub.status.busy": "2023-08-18T07:05:24.556727Z",
"iopub.status.idle": "2023-08-18T07:05:24.563081Z",
"shell.execute_reply": "2023-08-18T07:05:24.562365Z"
},
"origin_pos": 18,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[[[0., 0., 0., 1.],\n",
" [0., 0., 2., 3.],\n",
" [0., 2., 0., 3.],\n",
" [4., 6., 6., 9.]]]], grad_fn=<ConvolutionBackward0>)"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tconv = nn.ConvTranspose2d(1, 1, kernel_size=2, stride=2, bias=False)\n",
"tconv.weight.data = K\n",
"tconv(X)"
]
},
{
"cell_type": "markdown",
"id": "79ac62fd",
"metadata": {
"origin_pos": 20
},
"source": [
"对于多个输入和输出通道,转置卷积与常规卷积以相同方式运作。\n",
"假设输入有$c_i$个通道,且转置卷积为每个输入通道分配了一个$k_h\\times k_w$的卷积核张量。\n",
"当指定多个输出通道时,每个输出通道将有一个$c_i\\times k_h\\times k_w$的卷积核。\n",
"\n",
"同样,如果我们将$\\mathsf{X}$代入卷积层$f$来输出$\\mathsf{Y}=f(\\mathsf{X})$,并创建一个与$f$具有相同的超参数、但输出通道数量是$\\mathsf{X}$中通道数的转置卷积层$g$,那么$g(Y)$的形状将与$\\mathsf{X}$相同。\n",
"下面的示例可以解释这一点。\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "5e7033d7",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:24.566613Z",
"iopub.status.busy": "2023-08-18T07:05:24.565990Z",
"iopub.status.idle": "2023-08-18T07:05:24.577437Z",
"shell.execute_reply": "2023-08-18T07:05:24.576434Z"
},
"origin_pos": 22,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"X = torch.rand(size=(1, 10, 16, 16))\n",
"conv = nn.Conv2d(10, 20, kernel_size=5, padding=2, stride=3)\n",
"tconv = nn.ConvTranspose2d(20, 10, kernel_size=5, padding=2, stride=3)\n",
"tconv(conv(X)).shape == X.shape"
]
},
{
"cell_type": "markdown",
"id": "9908cdc8",
"metadata": {
"origin_pos": 24
},
"source": [
"## [**与矩阵变换的联系**]\n",
":label:`subsec-connection-to-mat-transposition`\n",
"\n",
"转置卷积为何以矩阵变换命名呢?\n",
"让我们首先看看如何使用矩阵乘法来实现卷积。\n",
"在下面的示例中,我们定义了一个$3\\times 3$的输入`X`和$2\\times 2$卷积核`K`,然后使用`corr2d`函数计算卷积输出`Y`。\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "260d5c6d",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:24.581485Z",
"iopub.status.busy": "2023-08-18T07:05:24.580866Z",
"iopub.status.idle": "2023-08-18T07:05:24.589179Z",
"shell.execute_reply": "2023-08-18T07:05:24.588233Z"
},
"origin_pos": 25,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[27., 37.],\n",
" [57., 67.]])"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"X = torch.arange(9.0).reshape(3, 3)\n",
"K = torch.tensor([[1.0, 2.0], [3.0, 4.0]])\n",
"Y = d2l.corr2d(X, K)\n",
"Y"
]
},
{
"cell_type": "markdown",
"id": "d5cb87b2",
"metadata": {
"origin_pos": 27
},
"source": [
"接下来,我们将卷积核`K`重写为包含大量0的稀疏权重矩阵`W`。\n",
"权重矩阵的形状是($4$,$9$),其中非0元素来自卷积核`K`。\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "d9f6ce2b",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:24.592769Z",
"iopub.status.busy": "2023-08-18T07:05:24.592164Z",
"iopub.status.idle": "2023-08-18T07:05:24.602392Z",
"shell.execute_reply": "2023-08-18T07:05:24.601439Z"
},
"origin_pos": 28,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[1., 2., 0., 3., 4., 0., 0., 0., 0.],\n",
" [0., 1., 2., 0., 3., 4., 0., 0., 0.],\n",
" [0., 0., 0., 1., 2., 0., 3., 4., 0.],\n",
" [0., 0., 0., 0., 1., 2., 0., 3., 4.]])"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def kernel2matrix(K):\n",
" k, W = torch.zeros(5), torch.zeros((4, 9))\n",
" k[:2], k[3:5] = K[0, :], K[1, :]\n",
" W[0, :5], W[1, 1:6], W[2, 3:8], W[3, 4:] = k, k, k, k\n",
" return W\n",
"\n",
"W = kernel2matrix(K)\n",
"W"
]
},
{
"cell_type": "markdown",
"id": "12f9b037",
"metadata": {
"origin_pos": 30
},
"source": [
"逐行连结输入`X`,得到一个长度为9的向量。\n",
"然后,将`W`与向量化的`X`做矩阵乘法,得到一个长度为4的向量。\n",
"重塑它之后,可以获得与上面的原始卷积操作所得相同的结果`Y`:我们刚刚使用矩阵乘法实现了卷积。\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "1fb803d0",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:24.606249Z",
"iopub.status.busy": "2023-08-18T07:05:24.605496Z",
"iopub.status.idle": "2023-08-18T07:05:24.612872Z",
"shell.execute_reply": "2023-08-18T07:05:24.611900Z"
},
"origin_pos": 31,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[True, True],\n",
" [True, True]])"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Y == torch.matmul(W, X.reshape(-1)).reshape(2, 2)"
]
},
{
"cell_type": "markdown",
"id": "27394a2c",
"metadata": {
"origin_pos": 33
},
"source": [
"同样,我们可以使用矩阵乘法来实现转置卷积。\n",
"在下面的示例中,我们将上面常规卷积得到的$2 \\times 2$输出`Y`作为转置卷积的输入。\n",
"想要通过矩阵相乘来实现它,我们只需要将权重矩阵`W`的形状转置为$(9, 4)$。\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "f1a55ff1",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:24.616575Z",
"iopub.status.busy": "2023-08-18T07:05:24.615826Z",
"iopub.status.idle": "2023-08-18T07:05:24.623063Z",
"shell.execute_reply": "2023-08-18T07:05:24.622144Z"
},
"origin_pos": 34,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[True, True, True],\n",
" [True, True, True],\n",
" [True, True, True]])"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Z = trans_conv(Y, K)\n",
"Z == torch.matmul(W.T, Y.reshape(-1)).reshape(3, 3)"
]
},
{
"cell_type": "markdown",
"id": "9614cf7b",
"metadata": {
"origin_pos": 36
},
"source": [
"抽象来看,给定输入向量$\\mathbf{x}$和权重矩阵$\\mathbf{W}$,卷积的前向传播函数可以通过将其输入与权重矩阵相乘并输出向量$\\mathbf{y}=\\mathbf{W}\\mathbf{x}$来实现。\n",
"由于反向传播遵循链式法则和$\\nabla_{\\mathbf{x}}\\mathbf{y}=\\mathbf{W}^\\top$,卷积的反向传播函数可以通过将其输入与转置的权重矩阵$\\mathbf{W}^\\top$相乘来实现。\n",
"因此,转置卷积层能够交换卷积层的正向传播函数和反向传播函数:它的正向传播和反向传播函数将输入向量分别与$\\mathbf{W}^\\top$和$\\mathbf{W}$相乘。\n",
"\n",
"## 小结\n",
"\n",
"* 与通过卷积核减少输入元素的常规卷积相反,转置卷积通过卷积核广播输入元素,从而产生形状大于输入的输出。\n",
"* 如果我们将$\\mathsf{X}$输入卷积层$f$来获得输出$\\mathsf{Y}=f(\\mathsf{X})$并创造一个与$f$有相同的超参数、但输出通道数是$\\mathsf{X}$中通道数的转置卷积层$g$,那么$g(Y)$的形状将与$\\mathsf{X}$相同。\n",
"* 我们可以使用矩阵乘法来实现卷积。转置卷积层能够交换卷积层的正向传播函数和反向传播函数。\n",
"\n",
"## 练习\n",
"\n",
"1. 在 :numref:`subsec-connection-to-mat-transposition`中,卷积输入`X`和转置卷积的输出`Z`具有相同的形状。它们的数值也相同吗?为什么?\n",
"1. 使用矩阵乘法来实现卷积是否有效率?为什么?\n"
]
},
{
"cell_type": "markdown",
"id": "bcd86378",
"metadata": {
"origin_pos": 38,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/3302)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
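作为对上面`trans_conv`的补充,下面的草图把步幅和填充也纳入这个朴素实现(参数语义参照 `nn.ConvTranspose2d`,但这只是原理示意,并非其内部实现)。步幅作用于中间结果的放置位置,填充则裁掉输出边缘的行与列:

```python
import torch

def trans_conv_stride(X, K, stride=1, padding=0):
    """带步幅和(输出)填充的二维转置卷积的朴素实现(示意)"""
    h, w = K.shape
    # 中间结果的尺寸:每个输入元素按步幅间隔放置一个 h×w 的核拷贝
    H = (X.shape[0] - 1) * stride + h
    W = (X.shape[1] - 1) * stride + w
    Y = torch.zeros((H, W))
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            Y[i * stride:i * stride + h, j * stride:j * stride + w] += X[i, j] * K
    # 转置卷积的填充作用于输出:裁掉边缘的 padding 行和列
    if padding > 0:
        Y = Y[padding:H - padding, padding:W - padding]
    return Y

X = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
K = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
print(trans_conv_stride(X, K, stride=2))   # 与上文步幅为2的输出一致
print(trans_conv_stride(X, K, padding=1))  # 与上文padding=1的输出tensor([[4.]])一致
```

由此也能看出输出尺寸公式:高为 $s(n_h-1)+k_h-2p_h$,宽同理。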
@@ -0,0 +1,50 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "048ad798",
"metadata": {
"origin_pos": 0
},
"source": [
"# 现代卷积神经网络\n",
":label:`chap_modern_cnn`\n",
"\n",
"上一章我们介绍了卷积神经网络的基本原理,本章将介绍现代的卷积神经网络架构,许多现代卷积神经网络的研究都是建立在这一章的基础上的。\n",
"本章中的每一个模型都曾一度占据主导地位,其中许多模型都是ImageNet竞赛的优胜者。ImageNet竞赛自2010年以来,一直是计算机视觉中监督学习进展的指向标。\n",
"\n",
"这些模型包括:\n",
"\n",
"- AlexNet。它是第一个在大规模视觉竞赛中击败传统计算机视觉模型的大型神经网络;\n",
"- 使用重复块的网络(VGG)。它利用许多重复的神经网络块;\n",
"- 网络中的网络(NiN)。它重复使用由卷积层和$1\\times 1$卷积层(用来代替全连接层)构成的块来构建深层网络;\n",
"- 含并行连结的网络(GoogLeNet)。它使用并行连结的网络,通过不同窗口大小的卷积层和最大汇聚层来并行抽取信息;\n",
"- 残差网络(ResNet)。它通过残差块构建跨层的数据通道,是计算机视觉中最流行的体系架构;\n",
"- 稠密连接网络(DenseNet)。它的计算成本很高,但给我们带来了更好的效果。\n",
"\n",
"虽然深度神经网络的概念非常简单——将神经网络堆叠在一起。但由于不同的网络架构和超参数选择,这些神经网络的性能会发生很大变化。\n",
"本章介绍的神经网络是将人类直觉和相关数学见解结合后,经过大量研究试错后的结晶。\n",
"我们会按时间顺序介绍这些模型,在追寻历史的脉络的同时,帮助培养对该领域发展的直觉。这将有助于研究开发自己的架构。\n",
"例如,本章介绍的批量规范化(batch normalization)和残差网络(ResNet)为设计和训练深度神经网络提供了重要思想指导。\n",
"\n",
":begin_tab:toc\n",
" - [alexnet](alexnet.ipynb)\n",
" - [vgg](vgg.ipynb)\n",
" - [nin](nin.ipynb)\n",
" - [googlenet](googlenet.ipynb)\n",
" - [batch-norm](batch-norm.ipynb)\n",
" - [resnet](resnet.ipynb)\n",
" - [densenet](densenet.ipynb)\n",
":end_tab:\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,424 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "dda65809",
"metadata": {
"origin_pos": 0
},
"source": [
"# 多输入多输出通道\n",
":label:`sec_channels`\n",
"\n",
"虽然我们在 :numref:`subsec_why-conv-channels`中描述了构成每个图像的多个通道和多层卷积层,例如彩色图像具有标准的RGB通道来代表红、绿和蓝,\n",
"但是到目前为止,我们仅展示了单个输入和单个输出通道的简化例子。\n",
"这使得我们可以将输入、卷积核和输出看作二维张量。\n",
"\n",
"当我们添加通道时,我们的输入和隐藏的表示都变成了三维张量。例如,每个RGB输入图像具有$3\\times h\\times w$的形状。我们将这个大小为$3$的轴称为*通道*(channel)维度。本节将更深入地研究具有多输入和多输出通道的卷积核。\n",
"\n",
"## 多输入通道\n",
"\n",
"当输入包含多个通道时,需要构造一个与输入数据具有相同输入通道数的卷积核,以便与输入数据进行互相关运算。假设输入的通道数为$c_i$,那么卷积核的输入通道数也需要为$c_i$。如果卷积核的窗口形状是$k_h\\times k_w$,那么当$c_i=1$时,我们可以把卷积核看作形状为$k_h\\times k_w$的二维张量。\n",
"\n",
"然而,当$c_i>1$时,我们卷积核的每个输入通道将包含形状为$k_h\\times k_w$的张量。将这些$c_i$个张量连结在一起可以得到形状为$c_i\\times k_h\\times k_w$的卷积核。由于输入和卷积核都有$c_i$个通道,我们可以对每个通道输入的二维张量和卷积核的二维张量进行互相关运算,再对通道求和(将$c_i$个结果相加)得到二维张量。这是多通道输入和多输入通道卷积核之间进行二维互相关运算的结果。\n",
"\n",
"在 :numref:`fig_conv_multi_in`中,我们演示了一个具有两个输入通道的二维互相关运算的示例。阴影部分是第一个输出元素以及用于计算这个输出的输入和核张量元素:$(1\\times1+2\\times2+4\\times3+5\\times4)+(0\\times0+1\\times1+3\\times2+4\\times3)=56$。\n",
"\n",
"![两个输入通道的互相关计算。](../img/conv-multi-in.svg)\n",
":label:`fig_conv_multi_in`\n",
"\n",
"为了加深理解,我们(**实现一下多输入通道互相关运算**)。\n",
"简而言之,我们所做的就是对每个通道执行互相关操作,然后将结果相加。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "412ea0b9",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:02:36.340241Z",
"iopub.status.busy": "2023-08-18T07:02:36.339505Z",
"iopub.status.idle": "2023-08-18T07:02:38.335558Z",
"shell.execute_reply": "2023-08-18T07:02:38.334349Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import torch\n",
"from d2l import torch as d2l"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "0cff24d4",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:02:38.339612Z",
"iopub.status.busy": "2023-08-18T07:02:38.339031Z",
"iopub.status.idle": "2023-08-18T07:02:38.344485Z",
"shell.execute_reply": "2023-08-18T07:02:38.343326Z"
},
"origin_pos": 4,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"def corr2d_multi_in(X, K):\n",
" # 先遍历“X”和“K”的第0个维度(通道维度),再把它们加在一起\n",
" return sum(d2l.corr2d(x, k) for x, k in zip(X, K))"
]
},
{
"cell_type": "markdown",
"id": "54507b8a",
"metadata": {
"origin_pos": 6
},
"source": [
"我们可以构造与 :numref:`fig_conv_multi_in`中的值相对应的输入张量`X`和核张量`K`,以(**验证互相关运算的输出**)。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "5a60b8f9",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:02:38.347937Z",
"iopub.status.busy": "2023-08-18T07:02:38.347463Z",
"iopub.status.idle": "2023-08-18T07:02:38.380997Z",
"shell.execute_reply": "2023-08-18T07:02:38.379885Z"
},
"origin_pos": 7,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[ 56., 72.],\n",
" [104., 120.]])"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"X = torch.tensor([[[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]],\n",
" [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]])\n",
"K = torch.tensor([[[0.0, 1.0], [2.0, 3.0]], [[1.0, 2.0], [3.0, 4.0]]])\n",
"\n",
"corr2d_multi_in(X, K)"
]
},
{
"cell_type": "markdown",
"id": "118648d7",
"metadata": {
"origin_pos": 8
},
"source": [
"## 多输出通道\n",
"\n",
"到目前为止,不论有多少输入通道,我们还只有一个输出通道。然而,正如我们在 :numref:`subsec_why-conv-channels`中所讨论的,每一层有多个输出通道是至关重要的。在最流行的神经网络架构中,随着神经网络层数的加深,我们常会增加输出通道的维数,通过减少空间分辨率以获得更大的通道深度。直观地说,我们可以将每个通道看作对不同特征的响应。而现实可能更为复杂一些,因为每个通道不是独立学习的,而是为了共同使用而优化的。因此,多输出通道并不仅是学习多个单通道的检测器。\n",
"\n",
"用$c_i$和$c_o$分别表示输入和输出通道的数目,并让$k_h$和$k_w$为卷积核的高度和宽度。为了获得多个通道的输出,我们可以为每个输出通道创建一个形状为$c_i\\times k_h\\times k_w$的卷积核张量,这样卷积核的形状是$c_o\\times c_i\\times k_h\\times k_w$。在互相关运算中,每个输出通道先获取所有输入通道,再以对应该输出通道的卷积核计算出结果。\n",
"\n",
"如下所示,我们实现一个[**计算多个通道的输出的互相关函数**]。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "aa2e4e5f",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:02:38.384845Z",
"iopub.status.busy": "2023-08-18T07:02:38.384104Z",
"iopub.status.idle": "2023-08-18T07:02:38.389279Z",
"shell.execute_reply": "2023-08-18T07:02:38.388126Z"
},
"origin_pos": 9,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"def corr2d_multi_in_out(X, K):\n",
" # 迭代“K”的第0个维度,每次都对输入“X”执行互相关运算。\n",
" # 最后将所有结果都叠加在一起\n",
" return torch.stack([corr2d_multi_in(X, k) for k in K], 0)"
]
},
{
"cell_type": "markdown",
"id": "f5677efa",
"metadata": {
"origin_pos": 10
},
"source": [
"通过将核张量`K`与`K+1`(`K`中每个元素加$1$)和`K+2`连接起来,构造了一个具有$3$个输出通道的卷积核。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "6dde7543",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:02:38.392733Z",
"iopub.status.busy": "2023-08-18T07:02:38.392298Z",
"iopub.status.idle": "2023-08-18T07:02:38.399310Z",
"shell.execute_reply": "2023-08-18T07:02:38.398211Z"
},
"origin_pos": 11,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([3, 2, 2, 2])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"K = torch.stack((K, K + 1, K + 2), 0)\n",
"K.shape"
]
},
{
"cell_type": "markdown",
"id": "c7e08b44",
"metadata": {
"origin_pos": 12
},
"source": [
"下面,我们对输入张量`X`与卷积核张量`K`执行互相关运算。现在的输出包含$3$个通道,第一个通道的结果与先前输入张量`X`和多输入单输出通道的结果一致。\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "86b2b71f",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:02:38.403159Z",
"iopub.status.busy": "2023-08-18T07:02:38.402457Z",
"iopub.status.idle": "2023-08-18T07:02:38.410409Z",
"shell.execute_reply": "2023-08-18T07:02:38.409310Z"
},
"origin_pos": 13,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[[ 56., 72.],\n",
" [104., 120.]],\n",
"\n",
" [[ 76., 100.],\n",
" [148., 172.]],\n",
"\n",
" [[ 96., 128.],\n",
" [192., 224.]]])"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"corr2d_multi_in_out(X, K)"
]
},
{
"cell_type": "markdown",
"id": "285e9413",
"metadata": {
"origin_pos": 14
},
"source": [
"## $1\\times 1$ 卷积层\n",
"\n",
"[~~1x1卷积~~]\n",
"\n",
"$1 \\times 1$卷积,即$k_h = k_w = 1$,看起来似乎没有多大意义。\n",
"毕竟,卷积的本质是有效提取相邻像素间的相关特征,而$1 \\times 1$卷积显然没有此作用。\n",
"尽管如此,$1 \\times 1$仍然十分流行,经常包含在复杂深层网络的设计中。下面,让我们详细地解读一下它的实际作用。\n",
"\n",
"因为使用了最小窗口,$1\\times 1$卷积失去了卷积层的特有能力——在高度和宽度维度上,识别相邻元素间相互作用的能力。\n",
"其实$1\\times 1$卷积的唯一计算发生在通道上。\n",
"\n",
" :numref:`fig_conv_1x1`展示了使用$1\\times 1$卷积核与$3$个输入通道和$2$个输出通道的互相关计算。\n",
"这里输入和输出具有相同的高度和宽度,输出中的每个元素都是从输入图像中同一位置的元素的线性组合。\n",
"我们可以将$1\\times 1$卷积层看作在每个像素位置应用的全连接层,以$c_i$个输入值转换为$c_o$个输出值。\n",
"因为这仍然是一个卷积层,所以跨像素的权重是一致的。\n",
"同时,$1\\times 1$卷积层需要的权重维度为$c_o\\times c_i$,再额外加上一个偏置。\n",
"\n",
"![互相关计算使用了具有3个输入通道和2个输出通道的 $1\\times 1$ 卷积核。其中,输入和输出具有相同的高度和宽度。](../img/conv-1x1.svg)\n",
":label:`fig_conv_1x1`\n",
"\n",
"下面,我们使用全连接层实现$1 \\times 1$卷积。\n",
"请注意,我们需要对输入和输出的数据形状进行调整。\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "f5be69b4",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:02:38.413874Z",
"iopub.status.busy": "2023-08-18T07:02:38.413425Z",
"iopub.status.idle": "2023-08-18T07:02:38.419141Z",
"shell.execute_reply": "2023-08-18T07:02:38.418037Z"
},
"origin_pos": 15,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"def corr2d_multi_in_out_1x1(X, K):\n",
" c_i, h, w = X.shape\n",
" c_o = K.shape[0]\n",
" X = X.reshape((c_i, h * w))\n",
" K = K.reshape((c_o, c_i))\n",
" # 全连接层中的矩阵乘法\n",
" Y = torch.matmul(K, X)\n",
" return Y.reshape((c_o, h, w))"
]
},
{
"cell_type": "markdown",
"id": "0685d9f1",
"metadata": {
"origin_pos": 16
},
"source": [
"当执行$1\\times 1$卷积运算时,上述函数相当于先前实现的互相关函数`corr2d_multi_in_out`。让我们用一些样本数据来验证这一点。\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "420f0d54",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:02:38.422499Z",
"iopub.status.busy": "2023-08-18T07:02:38.422070Z",
"iopub.status.idle": "2023-08-18T07:02:38.427214Z",
"shell.execute_reply": "2023-08-18T07:02:38.426115Z"
},
"origin_pos": 17,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"X = torch.normal(0, 1, (3, 3, 3))\n",
"K = torch.normal(0, 1, (2, 3, 1, 1))"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "7250eae2",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:02:38.430613Z",
"iopub.status.busy": "2023-08-18T07:02:38.430184Z",
"iopub.status.idle": "2023-08-18T07:02:38.438715Z",
"shell.execute_reply": "2023-08-18T07:02:38.437662Z"
},
"origin_pos": 19,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"Y1 = corr2d_multi_in_out_1x1(X, K)\n",
"Y2 = corr2d_multi_in_out(X, K)\n",
"assert float(torch.abs(Y1 - Y2).sum()) < 1e-6"
]
},
{
"cell_type": "markdown",
"id": "8ba378bd",
"metadata": {
"origin_pos": 20
},
"source": [
"## 小结\n",
"\n",
"* 多输入多输出通道可以用来扩展卷积层的模型。\n",
"* 当以每像素为基础应用时,$1\\times 1$卷积层相当于全连接层。\n",
"* $1\\times 1$卷积层通常用于调整网络层的通道数量和控制模型复杂性。\n",
"\n",
"## 练习\n",
"\n",
"1. 假设我们有两个卷积核,大小分别为$k_1$和$k_2$(中间没有非线性激活函数)。\n",
" 1. 证明运算可以用单次卷积来表示。\n",
" 1. 这个等效的单个卷积核的维数是多少呢?\n",
" 1. 反之亦然吗?\n",
"1. 假设输入为$c_i\\times h\\times w$,卷积核大小为$c_o\\times c_i\\times k_h\\times k_w$,填充为$(p_h, p_w)$,步幅为$(s_h, s_w)$。\n",
" 1. 前向传播的计算成本(乘法和加法)是多少?\n",
" 1. 内存占用是多少?\n",
" 1. 反向传播的内存占用是多少?\n",
" 1. 反向传播的计算成本是多少?\n",
"1. 如果我们将输入通道$c_i$和输出通道$c_o$的数量加倍,计算数量会增加多少?如果我们把填充数量翻一番会怎么样?\n",
"1. 如果卷积核的高度和宽度是$k_h=k_w=1$,前向传播的计算复杂度是多少?\n",
"1. 本节最后一个示例中的变量`Y1`和`Y2`是否完全相同?为什么?\n",
"1. 当卷积窗口不是$1\\times 1$时,如何使用矩阵乘法实现卷积?\n"
]
},
{
"cell_type": "markdown",
"id": "0167237f",
"metadata": {
"origin_pos": 22,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/1854)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
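针对上面练习中关于前向传播计算成本的问题,可以用下面的小函数粗略估算:每个输出元素需要 $c_i k_h k_w$ 次乘法,共有 $c_o h_o w_o$ 个输出元素(这只是一个计数草图,忽略加法、偏置和具体实现的优化;输出尺寸沿用前面关于填充和步幅的公式):

```python
def conv2d_cost(c_i, h, w, c_o, k_h, k_w, p_h=0, p_w=0, s_h=1, s_w=1):
    """估算二维卷积前向传播的输出形状与乘法次数(示意)"""
    # 输出高宽:(n + 2p - k) / s 向下取整后加 1
    h_o = (h + 2 * p_h - k_h) // s_h + 1
    w_o = (w + 2 * p_w - k_w) // s_w + 1
    # 每个输出元素做 c_i*k_h*k_w 次乘法,共 c_o*h_o*w_o 个输出元素
    mults = c_o * h_o * w_o * c_i * k_h * k_w
    return (c_o, h_o, w_o), mults

shape1, m1 = conv2d_cost(3, 32, 32, 8, 3, 3, p_h=1, p_w=1)
shape2, m2 = conv2d_cost(6, 32, 32, 16, 3, 3, p_h=1, p_w=1)
print(shape1, m1)        # (8, 32, 32) 221184
print(m2 // m1)          # 输入和输出通道同时加倍,乘法次数变为4倍
```

可以看到计算量与 $c_i c_o$ 成正比,这也解释了为什么$1\times 1$卷积常被用来先压缩通道数以控制模型复杂性。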
@@ -0,0 +1,557 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "7d2e90ba",
"metadata": {
"origin_pos": 0
},
"source": [
"# 图像卷积\n",
":label:`sec_conv_layer`\n",
"\n",
"上节我们解析了卷积层的原理,现在我们看看它的实际应用。由于卷积神经网络的设计是用于探索图像数据,本节我们将以图像为例。\n",
"\n",
"## 互相关运算\n",
"\n",
"严格来说,卷积层是个错误的叫法,因为它所表达的运算其实是*互相关运算*(cross-correlation),而不是卷积运算。\n",
"根据 :numref:`sec_why-conv`中的描述,在卷积层中,输入张量和核张量通过(**互相关运算**)产生输出张量。\n",
"\n",
"首先,我们暂时忽略通道(第三维)这一情况,看看如何处理二维图像数据和隐藏表示。在 :numref:`fig_correlation`中,输入是高度为$3$、宽度为$3$的二维张量(即形状为$3 \\times 3$)。卷积核的高度和宽度都是$2$,而卷积核窗口(或卷积窗口)的形状由内核的高度和宽度决定(即$2 \\times 2$)。\n",
"\n",
"![二维互相关运算。阴影部分是第一个输出元素,以及用于计算输出的输入张量元素和核张量元素:$0\\times0+1\\times1+3\\times2+4\\times3=19$.](../img/correlation.svg)\n",
":label:`fig_correlation`\n",
"\n",
"在二维互相关运算中,卷积窗口从输入张量的左上角开始,从左到右、从上到下滑动。\n",
"当卷积窗口滑动到新一个位置时,包含在该窗口中的部分张量与卷积核张量进行按元素相乘,得到的张量再求和得到一个单一的标量值,由此我们得出了这一位置的输出张量值。\n",
"在如上例子中,输出张量的四个元素由二维互相关运算得到,这个输出高度为$2$、宽度为$2$,如下所示:\n",
"\n",
"$$\n",
"0\\times0+1\\times1+3\\times2+4\\times3=19,\\\\\n",
"1\\times0+2\\times1+4\\times2+5\\times3=25,\\\\\n",
"3\\times0+4\\times1+6\\times2+7\\times3=37,\\\\\n",
"4\\times0+5\\times1+7\\times2+8\\times3=43.\n",
"$$\n",
"\n",
"注意,输出大小略小于输入大小。这是因为卷积核的宽度和高度大于1,\n",
"而卷积核只与图像中每个大小完全适合的位置进行互相关运算。\n",
"所以,输出大小等于输入大小$n_h \\times n_w$减去卷积核大小$k_h \\times k_w$,即:\n",
"\n",
"$$(n_h-k_h+1) \\times (n_w-k_w+1).$$\n",
"\n",
"这是因为我们需要足够的空间在图像上“移动”卷积核。稍后,我们将看到如何通过在图像边界周围填充零来保证有足够的空间移动卷积核,从而保持输出大小不变。\n",
"接下来,我们在`corr2d`函数中实现如上过程,该函数接受输入张量`X`和卷积核张量`K`,并返回输出张量`Y`。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "1bd2b0f5",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:26.587988Z",
"iopub.status.busy": "2023-08-18T07:07:26.587419Z",
"iopub.status.idle": "2023-08-18T07:07:28.559553Z",
"shell.execute_reply": "2023-08-18T07:07:28.558681Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import torch\n",
"from torch import nn\n",
"from d2l import torch as d2l"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "16abe7ca",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:28.563668Z",
"iopub.status.busy": "2023-08-18T07:07:28.562986Z",
"iopub.status.idle": "2023-08-18T07:07:28.569424Z",
"shell.execute_reply": "2023-08-18T07:07:28.568319Z"
},
"origin_pos": 4,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"def corr2d(X, K): #@save\n",
" \"\"\"计算二维互相关运算\"\"\"\n",
" h, w = K.shape\n",
" Y = torch.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))\n",
" for i in range(Y.shape[0]):\n",
" for j in range(Y.shape[1]):\n",
" Y[i, j] = (X[i:i + h, j:j + w] * K).sum()\n",
" return Y"
]
},
{
"cell_type": "markdown",
"id": "e2adaedd",
"metadata": {
"origin_pos": 6
},
"source": [
"通过 :numref:`fig_correlation`的输入张量`X`和卷积核张量`K`,我们来[**验证上述二维互相关运算的输出**]。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6f84e512",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:28.572958Z",
"iopub.status.busy": "2023-08-18T07:07:28.572449Z",
"iopub.status.idle": "2023-08-18T07:07:28.604854Z",
"shell.execute_reply": "2023-08-18T07:07:28.603813Z"
},
"origin_pos": 7,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[19., 25.],\n",
" [37., 43.]])"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"X = torch.tensor([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]])\n",
"K = torch.tensor([[0.0, 1.0], [2.0, 3.0]])\n",
"corr2d(X, K)"
]
},
{
"cell_type": "markdown",
"id": "e93ccf40",
"metadata": {
"origin_pos": 8
},
"source": [
"## 卷积层\n",
"\n",
"卷积层对输入和卷积核权重进行互相关运算,并在添加标量偏置之后产生输出。\n",
"所以,卷积层中的两个被训练的参数是卷积核权重和标量偏置。\n",
"就像我们之前随机初始化全连接层一样,在训练基于卷积层的模型时,我们也随机初始化卷积核权重。\n",
"\n",
"基于上面定义的`corr2d`函数[**实现二维卷积层**]。在`__init__`构造函数中,将`weight`和`bias`声明为两个模型参数。前向传播函数调用`corr2d`函数并添加偏置。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "450def67",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:28.610672Z",
"iopub.status.busy": "2023-08-18T07:07:28.609819Z",
"iopub.status.idle": "2023-08-18T07:07:28.615602Z",
"shell.execute_reply": "2023-08-18T07:07:28.614632Z"
},
"origin_pos": 10,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"class Conv2D(nn.Module):\n",
" def __init__(self, kernel_size):\n",
" super().__init__()\n",
" self.weight = nn.Parameter(torch.rand(kernel_size))\n",
" self.bias = nn.Parameter(torch.zeros(1))\n",
"\n",
" def forward(self, x):\n",
" return corr2d(x, self.weight) + self.bias"
]
},
{
"cell_type": "markdown",
"id": "d361e4c7",
"metadata": {
"origin_pos": 13
},
"source": [
"高度和宽度分别为$h$和$w$的卷积核可以被称为$h \\times w$卷积或$h \\times w$卷积核。\n",
"我们也将带有$h \\times w$卷积核的卷积层称为$h \\times w$卷积层。\n",
"\n",
"## 图像中目标的边缘检测\n",
"\n",
"如下是[**卷积层的一个简单应用:**]通过找到像素变化的位置,来(**检测图像中不同颜色的边缘**)。\n",
"首先,我们构造一个$6\\times 8$像素的黑白图像。中间四列为黑色($0$),其余像素为白色($1$)。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "dee1bc79",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:28.620077Z",
"iopub.status.busy": "2023-08-18T07:07:28.619277Z",
"iopub.status.idle": "2023-08-18T07:07:28.626719Z",
"shell.execute_reply": "2023-08-18T07:07:28.625746Z"
},
"origin_pos": 14,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[1., 1., 0., 0., 0., 0., 1., 1.],\n",
" [1., 1., 0., 0., 0., 0., 1., 1.],\n",
" [1., 1., 0., 0., 0., 0., 1., 1.],\n",
" [1., 1., 0., 0., 0., 0., 1., 1.],\n",
" [1., 1., 0., 0., 0., 0., 1., 1.],\n",
" [1., 1., 0., 0., 0., 0., 1., 1.]])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"X = torch.ones((6, 8))\n",
"X[:, 2:6] = 0\n",
"X"
]
},
{
"cell_type": "markdown",
"id": "ea455932",
"metadata": {
"origin_pos": 16
},
"source": [
"接下来,我们构造一个高度为$1$、宽度为$2$的卷积核`K`。当进行互相关运算时,如果水平相邻的两元素相同,则输出为零,否则输出为非零。\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "d042bda0",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:28.630101Z",
"iopub.status.busy": "2023-08-18T07:07:28.629606Z",
"iopub.status.idle": "2023-08-18T07:07:28.634133Z",
"shell.execute_reply": "2023-08-18T07:07:28.633165Z"
},
"origin_pos": 17,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"K = torch.tensor([[1.0, -1.0]])"
]
},
{
"cell_type": "markdown",
"id": "19635ba4",
"metadata": {
"origin_pos": 18
},
"source": [
"现在,我们对参数`X`(输入)和`K`(卷积核)执行互相关运算。\n",
"如下所示,[**输出`Y`中的1代表从白色到黑色的边缘,-1代表从黑色到白色的边缘**],其他情况的输出为$0$。\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "36de9e2a",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:28.639056Z",
"iopub.status.busy": "2023-08-18T07:07:28.638505Z",
"iopub.status.idle": "2023-08-18T07:07:28.646532Z",
"shell.execute_reply": "2023-08-18T07:07:28.645509Z"
},
"origin_pos": 19,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[ 0., 1., 0., 0., 0., -1., 0.],\n",
" [ 0., 1., 0., 0., 0., -1., 0.],\n",
" [ 0., 1., 0., 0., 0., -1., 0.],\n",
" [ 0., 1., 0., 0., 0., -1., 0.],\n",
" [ 0., 1., 0., 0., 0., -1., 0.],\n",
" [ 0., 1., 0., 0., 0., -1., 0.]])"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Y = corr2d(X, K)\n",
"Y"
]
},
{
"cell_type": "markdown",
"id": "9f3991ae",
"metadata": {
"origin_pos": 20
},
"source": [
"现在我们将输入的二维图像转置,再进行如上的互相关运算。\n",
"其输出如下,之前检测到的垂直边缘消失了。\n",
"不出所料,这个[**卷积核`K`只可以检测垂直边缘**],无法检测水平边缘。\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "0a754b2d",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:28.651371Z",
"iopub.status.busy": "2023-08-18T07:07:28.650819Z",
"iopub.status.idle": "2023-08-18T07:07:28.658419Z",
"shell.execute_reply": "2023-08-18T07:07:28.657436Z"
},
"origin_pos": 21,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[0., 0., 0., 0., 0.],\n",
" [0., 0., 0., 0., 0.],\n",
" [0., 0., 0., 0., 0.],\n",
" [0., 0., 0., 0., 0.],\n",
" [0., 0., 0., 0., 0.],\n",
" [0., 0., 0., 0., 0.],\n",
" [0., 0., 0., 0., 0.],\n",
" [0., 0., 0., 0., 0.]])"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"corr2d(X.t(), K)"
]
},
{
"cell_type": "markdown",
"id": "18ceafe9",
"metadata": {
"origin_pos": 22
},
"source": [
"## 学习卷积核\n",
"\n",
    "如果我们只需寻找黑白边缘,那么以上`[1, -1]`的边缘检测器足矣。然而,当有了更复杂数值的卷积核,或者连续的卷积层时,我们不可能手动设计滤波器。那么我们是否可以[**学习由`X`生成`Y`的卷积核**]呢?\n",
"\n",
"现在让我们看看是否可以通过仅查看“输入-输出”对来学习由`X`生成`Y`的卷积核。\n",
"我们先构造一个卷积层,并将其卷积核初始化为随机张量。接下来,在每次迭代中,我们比较`Y`与卷积层输出的平方误差,然后计算梯度来更新卷积核。为了简单起见,我们在此使用内置的二维卷积层,并忽略偏置。\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "2b423578",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:28.662260Z",
"iopub.status.busy": "2023-08-18T07:07:28.661527Z",
"iopub.status.idle": "2023-08-18T07:07:28.681412Z",
"shell.execute_reply": "2023-08-18T07:07:28.680192Z"
},
"origin_pos": 24,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch 2, loss 6.422\n",
"epoch 4, loss 1.225\n",
"epoch 6, loss 0.266\n",
"epoch 8, loss 0.070\n",
"epoch 10, loss 0.022\n"
]
}
],
"source": [
"# 构造一个二维卷积层,它具有1个输出通道和形状为(1,2)的卷积核\n",
    "conv2d = nn.Conv2d(1, 1, kernel_size=(1, 2), bias=False)\n",
"\n",
"# 这个二维卷积层使用四维输入和输出格式(批量大小、通道、高度、宽度),\n",
"# 其中批量大小和通道数都为1\n",
"X = X.reshape((1, 1, 6, 8))\n",
"Y = Y.reshape((1, 1, 6, 7))\n",
"lr = 3e-2 # 学习率\n",
"\n",
"for i in range(10):\n",
" Y_hat = conv2d(X)\n",
" l = (Y_hat - Y) ** 2\n",
" conv2d.zero_grad()\n",
" l.sum().backward()\n",
" # 迭代卷积核\n",
" conv2d.weight.data[:] -= lr * conv2d.weight.grad\n",
" if (i + 1) % 2 == 0:\n",
" print(f'epoch {i+1}, loss {l.sum():.3f}')"
]
},
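  {
   "cell_type": "markdown",
   "id": "b1f2a3c4",
   "metadata": {},
   "source": [
    "上面的循环手动执行了梯度下降更新。下面给出一个等价写法的草案(假设沿用上文的`conv2d`、`X`、`Y`和学习率`lr`),改用PyTorch内置的`torch.optim.SGD`优化器完成同样的更新:\n",
    "\n",
    "```python\n",
    "# 草案:用内置优化器改写上面的训练循环(假设conv2d、X、Y、lr已如上定义)\n",
    "trainer = torch.optim.SGD(conv2d.parameters(), lr=lr)\n",
    "for i in range(10):\n",
    "    l = ((conv2d(X) - Y) ** 2).sum()\n",
    "    trainer.zero_grad()  # 清空上一步累积的梯度\n",
    "    l.backward()         # 反向传播计算梯度\n",
    "    trainer.step()       # 按学习率更新卷积核参数\n",
    "```\n",
    "\n",
    "两种写法在数学上一致:`trainer.step()`对每个参数执行的正是$w \\leftarrow w - \\eta\\,\\partial l/\\partial w$。\n"
   ]
  },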
{
"cell_type": "markdown",
"id": "37744bcf",
"metadata": {
"origin_pos": 27
},
"source": [
"在$10$次迭代之后,误差已经降到足够低。现在我们来看看我们[**所学的卷积核的权重张量**]。\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "b40515e8",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:28.684721Z",
"iopub.status.busy": "2023-08-18T07:07:28.684428Z",
"iopub.status.idle": "2023-08-18T07:07:28.691507Z",
"shell.execute_reply": "2023-08-18T07:07:28.690512Z"
},
"origin_pos": 29,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[ 1.0010, -0.9739]])"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conv2d.weight.data.reshape((1, 2))"
]
},
{
"cell_type": "markdown",
"id": "366d2c4f",
"metadata": {
"origin_pos": 32
},
"source": [
"细心的读者一定会发现,我们学习到的卷积核权重非常接近我们之前定义的卷积核`K`。\n",
"\n",
"## 互相关和卷积\n",
"\n",
"回想一下我们在 :numref:`sec_why-conv`中观察到的互相关和卷积运算之间的对应关系。\n",
"为了得到正式的*卷积*运算输出,我们需要执行 :eqref:`eq_2d-conv-discrete`中定义的严格卷积运算,而不是互相关运算。\n",
"幸运的是,它们差别不大,我们只需水平和垂直翻转二维卷积核张量,然后对输入张量执行*互相关*运算。\n",
"\n",
"值得注意的是,由于卷积核是从数据中学习到的,因此无论这些层执行严格的卷积运算还是互相关运算,卷积层的输出都不会受到影响。\n",
"为了说明这一点,假设卷积层执行*互相关*运算并学习 :numref:`fig_correlation`中的卷积核,该卷积核在这里由矩阵$\\mathbf{K}$表示。\n",
"假设其他条件不变,当这个层执行严格的*卷积*时,学习的卷积核$\\mathbf{K}'$在水平和垂直翻转之后将与$\\mathbf{K}$相同。\n",
    "也就是说,当卷积层对 :numref:`fig_correlation`中的输入和$\\mathbf{K}'$执行严格*卷积*运算时,将得到与 :numref:`fig_correlation`中互相关运算相同的输出。\n",
"\n",
"为了与深度学习文献中的标准术语保持一致,我们将继续把“互相关运算”称为卷积运算,尽管严格地说,它们略有不同。\n",
"此外,对于卷积核张量上的权重,我们称其为*元素*。\n",
"\n",
"## 特征映射和感受野\n",
"\n",
    "如在 :numref:`subsec_why-conv-channels`中所述, :numref:`fig_correlation`中卷积层的输出有时被称为*特征映射*(feature map),因为它可以被视为输入在空间维度上传递给下一层的一组学习到的表示(特征)。\n",
"在卷积神经网络中,对于某一层的任意元素$x$,其*感受野*(receptive field)是指在前向传播期间可能影响$x$计算的所有元素(来自所有先前层)。\n",
"\n",
"请注意,感受野可能大于输入的实际大小。让我们用 :numref:`fig_correlation`为例来解释感受野:\n",
"给定$2 \\times 2$卷积核,阴影输出元素值$19$的感受野是输入阴影部分的四个元素。\n",
"假设之前输出为$\\mathbf{Y}$,其大小为$2 \\times 2$,现在我们在其后附加一个卷积层,该卷积层以$\\mathbf{Y}$为输入,输出单个元素$z$。\n",
"在这种情况下,$\\mathbf{Y}$上的$z$的感受野包括$\\mathbf{Y}$的所有四个元素,而输入的感受野包括最初所有九个输入元素。\n",
"因此,当一个特征图中的任意元素需要检测更广区域的输入特征时,我们可以构建一个更深的网络。\n",
"\n",
"## 小结\n",
"\n",
"* 二维卷积层的核心计算是二维互相关运算。最简单的形式是,对二维输入数据和卷积核执行互相关操作,然后添加一个偏置。\n",
"* 我们可以设计一个卷积核来检测图像的边缘。\n",
"* 我们可以从数据中学习卷积核的参数。\n",
    "* 学习卷积核时,无论用严格卷积运算还是互相关运算,卷积层的输出都不会受太大影响。\n",
"* 当需要检测输入特征中更广区域时,我们可以构建一个更深的卷积网络。\n",
"\n",
"## 练习\n",
"\n",
"1. 构建一个具有对角线边缘的图像`X`。\n",
" 1. 如果将本节中举例的卷积核`K`应用于`X`,会发生什么情况?\n",
" 1. 如果转置`X`会发生什么?\n",
" 1. 如果转置`K`会发生什么?\n",
    "1. 当我们尝试对自己创建的`Conv2D`类自动求导时,会看到什么样的错误消息?\n",
"1. 如何通过改变输入张量和卷积核张量,将互相关运算表示为矩阵乘法?\n",
"1. 手工设计一些卷积核。\n",
" 1. 二阶导数的核的形式是什么?\n",
" 1. 积分的核的形式是什么?\n",
" 1. 得到$d$次导数的最小核的大小是多少?\n"
]
},
{
"cell_type": "markdown",
"id": "c9adecf6",
"metadata": {
"origin_pos": 34,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/1848)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,53 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "5231073b",
"metadata": {
"origin_pos": 0
},
"source": [
"# 卷积神经网络\n",
":label:`chap_cnn`\n",
"\n",
"在前面的章节中,我们遇到过图像数据。\n",
"这种数据的每个样本都由一个二维像素网格组成,\n",
"每个像素可能是一个或者多个数值,取决于是黑白还是彩色图像。\n",
"到目前为止,我们处理这类结构丰富的数据的方式还不够有效。\n",
"我们仅仅通过将图像数据展平成一维向量而忽略了每个图像的空间结构信息,再将数据送入一个全连接的多层感知机中。\n",
    "由于这些网络对特征元素的顺序并不敏感,打乱像素后仍会得到类似的结果,因此它们无法利用相近像素之间相互关联这一先验知识,从图像数据中学习得到有效的模型。\n",
"\n",
    "本章介绍的*卷积神经网络*(convolutional neural network,CNN)是一类强大的、为处理图像数据而设计的神经网络。\n",
"基于卷积神经网络架构的模型在计算机视觉领域中已经占主导地位,当今几乎所有的图像识别、目标检测或语义分割相关的学术竞赛和商业应用都以这种方法为基础。\n",
"\n",
"现代卷积神经网络的设计得益于生物学、群论和一系列的补充实验。\n",
"卷积神经网络需要的参数少于全连接架构的网络,而且卷积也很容易用GPU并行计算。\n",
    "因此卷积神经网络不仅样本效率高,用较少的数据就能获得精确的模型,而且计算效率也高。\n",
"久而久之,从业人员越来越多地使用卷积神经网络。即使在通常使用循环神经网络的一维序列结构任务上(例如音频、文本和时间序列分析),卷积神经网络也越来越受欢迎。\n",
"通过对卷积神经网络一些巧妙的调整,也使它们在图结构数据和推荐系统中发挥作用。\n",
"\n",
"在本章的开始,我们将介绍构成所有卷积网络主干的基本元素。\n",
"这包括卷积层本身、填充(padding)和步幅(stride)的基本细节、用于在相邻区域汇聚信息的汇聚层(pooling)、在每一层中多通道(channel)的使用,以及有关现代卷积网络架构的仔细讨论。\n",
"在本章的最后,我们将介绍一个完整的、可运行的LeNet模型:这是第一个成功应用的卷积神经网络,比现代深度学习兴起时间还要早。\n",
"在下一章中,我们将深入研究一些流行的、相对较新的卷积神经网络架构的完整实现,这些网络架构涵盖了现代从业者通常使用的大多数经典技术。\n",
"\n",
":begin_tab:toc\n",
" - [why-conv](why-conv.ipynb)\n",
" - [conv-layer](conv-layer.ipynb)\n",
" - [padding-and-strides](padding-and-strides.ipynb)\n",
" - [channels](channels.ipynb)\n",
" - [pooling](pooling.ipynb)\n",
" - [lenet](lenet.ipynb)\n",
":end_tab:\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
File diff suppressed because it is too large
@@ -0,0 +1,299 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "f68fd76a",
"metadata": {
"origin_pos": 0
},
"source": [
"# 填充和步幅\n",
":label:`sec_padding`\n",
"\n",
"在前面的例子 :numref:`fig_correlation`中,输入的高度和宽度都为$3$,卷积核的高度和宽度都为$2$,生成的输出表征的维数为$2\\times2$。\n",
"正如我们在 :numref:`sec_conv_layer`中所概括的那样,假设输入形状为$n_h\\times n_w$,卷积核形状为$k_h\\times k_w$,那么输出形状将是$(n_h-k_h+1) \\times (n_w-k_w+1)$。\n",
"因此,卷积的输出形状取决于输入形状和卷积核的形状。\n",
"\n",
    "还有什么因素会影响输出的大小呢?本节我们将介绍*填充*(padding)和*步幅*(stride)。假设以下情景:\n",
"有时,在应用了连续的卷积之后,我们最终得到的输出远小于输入大小。这是由于卷积核的宽度和高度通常大于$1$所导致的。比如,一个$240 \\times 240$像素的图像,经过$10$层$5 \\times 5$的卷积后,将减少到$200 \\times 200$像素。如此一来,原始图像的边界丢失了许多有用信息。而*填充*是解决此问题最有效的方法;\n",
"有时,我们可能希望大幅降低图像的宽度和高度。例如,如果我们发现原始的输入分辨率十分冗余。*步幅*则可以在这类情况下提供帮助。\n",
"\n",
"## 填充\n",
"\n",
"如上所述,在应用多层卷积时,我们常常丢失边缘像素。\n",
"由于我们通常使用小卷积核,因此对于任何单个卷积,我们可能只会丢失几个像素。\n",
"但随着我们应用许多连续卷积层,累积丢失的像素数就多了。\n",
"解决这个问题的简单方法即为*填充*(padding):在输入图像的边界填充元素(通常填充元素是$0$)。\n",
"例如,在 :numref:`img_conv_pad`中,我们将$3 \\times 3$输入填充到$5 \\times 5$,那么它的输出就增加为$4 \\times 4$。阴影部分是第一个输出元素以及用于输出计算的输入和核张量元素:\n",
"$0\\times0+0\\times1+0\\times2+0\\times3=0$。\n",
"\n",
"![带填充的二维互相关。](../img/conv-pad.svg)\n",
":label:`img_conv_pad`\n",
"\n",
"通常,如果我们添加$p_h$行填充(大约一半在顶部,一半在底部)和$p_w$列填充(左侧大约一半,右侧一半),则输出形状将为\n",
"\n",
"$$(n_h-k_h+p_h+1)\\times(n_w-k_w+p_w+1)。$$\n",
"\n",
"这意味着输出的高度和宽度将分别增加$p_h$和$p_w$。\n",
"\n",
"在许多情况下,我们需要设置$p_h=k_h-1$和$p_w=k_w-1$,使输入和输出具有相同的高度和宽度。\n",
"这样可以在构建网络时更容易地预测每个图层的输出形状。假设$k_h$是奇数,我们将在高度的两侧填充$p_h/2$行。\n",
"如果$k_h$是偶数,则一种可能性是在输入顶部填充$\\lceil p_h/2\\rceil$行,在底部填充$\\lfloor p_h/2\\rfloor$行。同理,我们填充宽度的两侧。\n",
"\n",
"卷积神经网络中卷积核的高度和宽度通常为奇数,例如1、3、5或7。\n",
"选择奇数的好处是,保持空间维度的同时,我们可以在顶部和底部填充相同数量的行,在左侧和右侧填充相同数量的列。\n",
"\n",
"此外,使用奇数的核大小和填充大小也提供了书写上的便利。对于任何二维张量`X`,当满足:\n",
"1. 卷积核的大小是奇数;\n",
"2. 所有边的填充行数和列数相同;\n",
    "3. 输出与输入具有相同的高度和宽度,\n",
"则可以得出:输出`Y[i, j]`是通过以输入`X[i, j]`为中心,与卷积核进行互相关计算得到的。\n",
"\n",
"比如,在下面的例子中,我们创建一个高度和宽度为3的二维卷积层,并(**在所有侧边填充1个像素**)。给定高度和宽度为8的输入,则输出的高度和宽度也是8。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "ee25ca28",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:27.440657Z",
"iopub.status.busy": "2023-08-18T07:00:27.439788Z",
"iopub.status.idle": "2023-08-18T07:00:28.396461Z",
"shell.execute_reply": "2023-08-18T07:00:28.395508Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([8, 8])"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import torch\n",
"from torch import nn\n",
"\n",
"\n",
"# 为了方便起见,我们定义了一个计算卷积层的函数。\n",
    "# 此函数初始化卷积层权重,并在输入上增添、在输出上缩减相应的维数\n",
"def comp_conv2d(conv2d, X):\n",
" # 这里的(1,1)表示批量大小和通道数都是1\n",
" X = X.reshape((1, 1) + X.shape)\n",
" Y = conv2d(X)\n",
" # 省略前两个维度:批量大小和通道\n",
" return Y.reshape(Y.shape[2:])\n",
"\n",
"# 请注意,这里每边都填充了1行或1列,因此总共添加了2行或2列\n",
"conv2d = nn.Conv2d(1, 1, kernel_size=3, padding=1)\n",
"X = torch.rand(size=(8, 8))\n",
"comp_conv2d(conv2d, X).shape"
]
},
{
"cell_type": "markdown",
"id": "f46e5ea5",
"metadata": {
"origin_pos": 5
},
"source": [
"当卷积核的高度和宽度不同时,我们可以[**填充不同的高度和宽度**],使输出和输入具有相同的高度和宽度。在如下示例中,我们使用高度为5,宽度为3的卷积核,高度和宽度两边的填充分别为2和1。\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "5dadebb1",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:28.400923Z",
"iopub.status.busy": "2023-08-18T07:00:28.400085Z",
"iopub.status.idle": "2023-08-18T07:00:28.406887Z",
"shell.execute_reply": "2023-08-18T07:00:28.406085Z"
},
"origin_pos": 7,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([8, 8])"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conv2d = nn.Conv2d(1, 1, kernel_size=(5, 3), padding=(2, 1))\n",
"comp_conv2d(conv2d, X).shape"
]
},
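  {
   "cell_type": "markdown",
   "id": "c2d3e4f5",
   "metadata": {},
   "source": [
    "上面两个例子的输出形状都可以用公式$(n_h-k_h+p_h+1)\\times(n_w-k_w+p_w+1)$直接核对。下面是一个简单的核对草案(数字对应上文两个例子,$p_h$、$p_w$按两侧合计):\n",
    "\n",
    "```python\n",
    "# 例1:8x8输入,3x3核,每侧填充1(合计p_h=p_w=2):8-3+2+1=8\n",
    "assert (8 - 3 + 2 + 1, 8 - 3 + 2 + 1) == (8, 8)\n",
    "# 例2:8x8输入,(5,3)核,高度每侧填充2、宽度每侧填充1(合计4和2)\n",
    "assert (8 - 5 + 4 + 1, 8 - 3 + 2 + 1) == (8, 8)\n",
    "```\n"
   ]
  },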
{
"cell_type": "markdown",
"id": "5a303f4b",
"metadata": {
"origin_pos": 10
},
"source": [
"## 步幅\n",
"\n",
"在计算互相关时,卷积窗口从输入张量的左上角开始,向下、向右滑动。\n",
"在前面的例子中,我们默认每次滑动一个元素。\n",
"但是,有时候为了高效计算或是缩减采样次数,卷积窗口可以跳过中间位置,每次滑动多个元素。\n",
"\n",
"我们将每次滑动元素的数量称为*步幅*(stride)。到目前为止,我们只使用过高度或宽度为$1$的步幅,那么如何使用较大的步幅呢?\n",
" :numref:`img_conv_stride`是垂直步幅为$3$,水平步幅为$2$的二维互相关运算。\n",
    "着色部分是输出元素以及用于输出计算的输入和卷积核张量元素:$0\\times0+0\\times1+1\\times2+2\\times3=8$、$0\\times0+6\\times1+0\\times2+0\\times3=6$。\n",
"\n",
"可以看到,为了计算输出中第一列的第二个元素和第一行的第二个元素,卷积窗口分别向下滑动三行和向右滑动两列。但是,当卷积窗口继续向右滑动两列时,没有输出,因为输入元素无法填充窗口(除非我们添加另一列填充)。\n",
"\n",
"![垂直步幅为 $3$,水平步幅为 $2$ 的二维互相关运算。](../img/conv-stride.svg)\n",
":label:`img_conv_stride`\n",
"\n",
"通常,当垂直步幅为$s_h$、水平步幅为$s_w$时,输出形状为\n",
"\n",
"$$\\lfloor(n_h-k_h+p_h+s_h)/s_h\\rfloor \\times \\lfloor(n_w-k_w+p_w+s_w)/s_w\\rfloor.$$\n",
"\n",
"如果我们设置了$p_h=k_h-1$和$p_w=k_w-1$,则输出形状将简化为$\\lfloor(n_h+s_h-1)/s_h\\rfloor \\times \\lfloor(n_w+s_w-1)/s_w\\rfloor$。\n",
"更进一步,如果输入的高度和宽度可以被垂直和水平步幅整除,则输出形状将为$(n_h/s_h) \\times (n_w/s_w)$。\n",
"\n",
"下面,我们[**将高度和宽度的步幅设置为2**],从而将输入的高度和宽度减半。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "7b6ac278",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:28.410395Z",
"iopub.status.busy": "2023-08-18T07:00:28.410090Z",
"iopub.status.idle": "2023-08-18T07:00:28.416621Z",
"shell.execute_reply": "2023-08-18T07:00:28.415848Z"
},
"origin_pos": 12,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([4, 4])"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conv2d = nn.Conv2d(1, 1, kernel_size=3, padding=1, stride=2)\n",
"comp_conv2d(conv2d, X).shape"
]
},
{
"cell_type": "markdown",
"id": "e9e254ec",
"metadata": {
"origin_pos": 15
},
"source": [
"接下来,看(**一个稍微复杂的例子**)。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "6f1c0e6c",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:28.422070Z",
"iopub.status.busy": "2023-08-18T07:00:28.421461Z",
"iopub.status.idle": "2023-08-18T07:00:28.429200Z",
"shell.execute_reply": "2023-08-18T07:00:28.427969Z"
},
"origin_pos": 17,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([2, 2])"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"conv2d = nn.Conv2d(1, 1, kernel_size=(3, 5), padding=(0, 1), stride=(3, 4))\n",
"comp_conv2d(conv2d, X).shape"
]
},
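  {
   "cell_type": "markdown",
   "id": "d3e4f5a6",
   "metadata": {},
   "source": [
    "练习1要求核对最后一个示例的输出形状。按公式$\\lfloor(n_h-k_h+p_h+s_h)/s_h\\rfloor \\times \\lfloor(n_w-k_w+p_w+s_w)/s_w\\rfloor$,可以用整数除法核对如下(草案;数字对应上面`kernel_size=(3, 5)`、`padding=(0, 1)`、`stride=(3, 4)`的例子,填充按两侧合计):\n",
    "\n",
    "```python\n",
    "# 高度:floor((8 - 3 + 0 + 3) / 3) = 2;宽度:floor((8 - 5 + 2 + 4) / 4) = 2\n",
    "assert ((8 - 3 + 0 + 3) // 3, (8 - 5 + 2 + 4) // 4) == (2, 2)\n",
    "```\n",
    "\n",
    "这与实验得到的`torch.Size([2, 2])`一致。\n"
   ]
  },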
{
"cell_type": "markdown",
"id": "4674c8d4",
"metadata": {
"origin_pos": 20
},
"source": [
"为了简洁起见,当输入高度和宽度两侧的填充数量分别为$p_h$和$p_w$时,我们称之为填充$(p_h, p_w)$。当$p_h = p_w = p$时,填充是$p$。同理,当高度和宽度上的步幅分别为$s_h$和$s_w$时,我们称之为步幅$(s_h, s_w)$。特别地,当$s_h = s_w = s$时,我们称步幅为$s$。默认情况下,填充为0,步幅为1。在实践中,我们很少使用不一致的步幅或填充,也就是说,我们通常有$p_h = p_w$和$s_h = s_w$。\n",
"\n",
"## 小结\n",
"\n",
"* 填充可以增加输出的高度和宽度。这常用来使输出与输入具有相同的高和宽。\n",
"* 步幅可以减小输出的高和宽,例如输出的高和宽仅为输入的高和宽的$1/n$($n$是一个大于$1$的整数)。\n",
"* 填充和步幅可用于有效地调整数据的维度。\n",
"\n",
"## 练习\n",
"\n",
"1. 对于本节中的最后一个示例,计算其输出形状,以查看它是否与实验结果一致。\n",
"1. 在本节中的实验中,试一试其他填充和步幅组合。\n",
"1. 对于音频信号,步幅$2$说明什么?\n",
"1. 步幅大于$1$的计算优势是什么?\n"
]
},
{
"cell_type": "markdown",
"id": "a93cbfa0",
"metadata": {
"origin_pos": 22,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/1851)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,527 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "3406a2db",
"metadata": {
"origin_pos": 0
},
"source": [
"# 汇聚层\n",
":label:`sec_pooling`\n",
"\n",
"通常当我们处理图像时,我们希望逐渐降低隐藏表示的空间分辨率、聚集信息,这样随着我们在神经网络中层叠的上升,每个神经元对其敏感的感受野(输入)就越大。\n",
"\n",
"而我们的机器学习任务通常会跟全局图像的问题有关(例如,“图像是否包含一只猫呢?”),所以我们最后一层的神经元应该对整个输入的全局敏感。通过逐渐聚合信息,生成越来越粗糙的映射,最终实现学习全局表示的目标,同时将卷积图层的所有优势保留在中间层。\n",
"\n",
"此外,当检测较底层的特征时(例如 :numref:`sec_conv_layer`中所讨论的边缘),我们通常希望这些特征保持某种程度上的平移不变性。例如,如果我们拍摄黑白之间轮廓清晰的图像`X`,并将整个图像向右移动一个像素,即`Z[i, j] = X[i, j + 1]`,则新图像`Z`的输出可能大不相同。而在现实中,随着拍摄角度的移动,任何物体几乎不可能发生在同一像素上。即使用三脚架拍摄一个静止的物体,由于快门的移动而引起的相机振动,可能会使所有物体左右移动一个像素(除了高端相机配备了特殊功能来解决这个问题)。\n",
"\n",
"本节将介绍*汇聚*(pooling)层,它具有双重目的:降低卷积层对位置的敏感性,同时降低对空间降采样表示的敏感性。\n",
"\n",
"## 最大汇聚层和平均汇聚层\n",
"\n",
"与卷积层类似,汇聚层运算符由一个固定形状的窗口组成,该窗口根据其步幅大小在输入的所有区域上滑动,为固定形状窗口(有时称为*汇聚窗口*)遍历的每个位置计算一个输出。\n",
"然而,不同于卷积层中的输入与卷积核之间的互相关计算,汇聚层不包含参数。\n",
    "相反,汇聚运算是确定性的,我们通常计算汇聚窗口中所有元素的最大值或平均值。这些操作分别称为*最大汇聚层*(maximum pooling)和*平均汇聚层*(average pooling)。\n",
"\n",
    "在这两种情况下,与互相关运算符一样,汇聚窗口从输入张量的左上角开始,从左往右、从上往下地在输入张量内滑动。在汇聚窗口到达的每个位置,它计算该窗口中输入子张量的最大值或平均值,具体取决于使用的是最大汇聚层还是平均汇聚层。\n",
"\n",
"![汇聚窗口形状为 $2\\times 2$ 的最大汇聚层。着色部分是第一个输出元素,以及用于计算这个输出的输入元素: $\\max(0, 1, 3, 4)=4$.](../img/pooling.svg)\n",
":label:`fig_pooling`\n",
"\n",
" :numref:`fig_pooling`中的输出张量的高度为$2$,宽度为$2$。这四个元素为每个汇聚窗口中的最大值:\n",
"\n",
"$$\n",
"\\max(0, 1, 3, 4)=4,\\\\\n",
"\\max(1, 2, 4, 5)=5,\\\\\n",
"\\max(3, 4, 6, 7)=7,\\\\\n",
"\\max(4, 5, 7, 8)=8.\\\\\n",
"$$\n",
"\n",
"汇聚窗口形状为$p \\times q$的汇聚层称为$p \\times q$汇聚层,汇聚操作称为$p \\times q$汇聚。\n",
"\n",
"回到本节开头提到的对象边缘检测示例,现在我们将使用卷积层的输出作为$2\\times 2$最大汇聚的输入。\n",
"设置卷积层输入为`X`,汇聚层输出为`Y`。\n",
"无论`X[i, j]`和`X[i, j + 1]`的值相同与否,或`X[i, j + 1]`和`X[i, j + 2]`的值相同与否,汇聚层始终输出`Y[i, j] = 1`。\n",
"也就是说,使用$2\\times 2$最大汇聚层,即使在高度或宽度上移动一个元素,卷积层仍然可以识别到模式。\n",
"\n",
"在下面的代码中的`pool2d`函数,我们(**实现汇聚层的前向传播**)。\n",
"这类似于 :numref:`sec_conv_layer`中的`corr2d`函数。\n",
"然而,这里我们没有卷积核,输出为输入中每个区域的最大值或平均值。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "292e979e",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:02:18.192662Z",
"iopub.status.busy": "2023-08-18T07:02:18.191844Z",
"iopub.status.idle": "2023-08-18T07:02:20.224371Z",
"shell.execute_reply": "2023-08-18T07:02:20.223413Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import torch\n",
"from torch import nn\n",
"from d2l import torch as d2l"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "fe35adac",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:02:20.228639Z",
"iopub.status.busy": "2023-08-18T07:02:20.227964Z",
"iopub.status.idle": "2023-08-18T07:02:20.234155Z",
"shell.execute_reply": "2023-08-18T07:02:20.233266Z"
},
"origin_pos": 4,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"def pool2d(X, pool_size, mode='max'):\n",
" p_h, p_w = pool_size\n",
" Y = torch.zeros((X.shape[0] - p_h + 1, X.shape[1] - p_w + 1))\n",
" for i in range(Y.shape[0]):\n",
" for j in range(Y.shape[1]):\n",
" if mode == 'max':\n",
" Y[i, j] = X[i: i + p_h, j: j + p_w].max()\n",
" elif mode == 'avg':\n",
" Y[i, j] = X[i: i + p_h, j: j + p_w].mean()\n",
" return Y"
]
},
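  {
   "cell_type": "markdown",
   "id": "e4f5a6b7",
   "metadata": {},
   "source": [
    "作为一个简单的核对草案,也可以用纯Python按同样的滑动窗口规则计算 :numref:`fig_pooling`中的$2\\times 2$最大汇聚(为避免与后文的`X`冲突,这里使用假设的变量名`inp`、`out`):\n",
    "\n",
    "```python\n",
    "# 纯Python版本的2x2最大汇聚,输入为fig_pooling中的3x3矩阵\n",
    "inp = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]\n",
    "out = [[max(inp[i][j], inp[i][j + 1], inp[i + 1][j], inp[i + 1][j + 1])\n",
    "        for j in range(2)] for i in range(2)]\n",
    "out  # [[4, 5], [7, 8]]\n",
    "```\n"
   ]
  },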
{
"cell_type": "markdown",
"id": "27b51b5e",
"metadata": {
"origin_pos": 6
},
"source": [
"我们可以构建 :numref:`fig_pooling`中的输入张量`X`,[**验证二维最大汇聚层的输出**]。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "3a781c85",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:02:20.237767Z",
"iopub.status.busy": "2023-08-18T07:02:20.237211Z",
"iopub.status.idle": "2023-08-18T07:02:20.268065Z",
"shell.execute_reply": "2023-08-18T07:02:20.267212Z"
},
"origin_pos": 7,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[4., 5.],\n",
" [7., 8.]])"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"X = torch.tensor([[0.0, 1.0, 2.0], [3.0, 4.0, 5.0], [6.0, 7.0, 8.0]])\n",
"pool2d(X, (2, 2))"
]
},
{
"cell_type": "markdown",
"id": "8cc88d86",
"metadata": {
"origin_pos": 8
},
"source": [
"此外,我们还可以(**验证平均汇聚层**)。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "4f9a1ffd",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:02:20.272001Z",
"iopub.status.busy": "2023-08-18T07:02:20.271411Z",
"iopub.status.idle": "2023-08-18T07:02:20.277849Z",
"shell.execute_reply": "2023-08-18T07:02:20.276928Z"
},
"origin_pos": 9,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[2., 3.],\n",
" [5., 6.]])"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pool2d(X, (2, 2), 'avg')"
]
},
{
"cell_type": "markdown",
"id": "447c6999",
"metadata": {
"origin_pos": 10
},
"source": [
"## [**填充和步幅**]\n",
"\n",
"与卷积层一样,汇聚层也可以改变输出形状。和以前一样,我们可以通过填充和步幅以获得所需的输出形状。\n",
"下面,我们用深度学习框架中内置的二维最大汇聚层,来演示汇聚层中填充和步幅的使用。\n",
"我们首先构造了一个输入张量`X`,它有四个维度,其中样本数和通道数都是1。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "140d08f5",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:02:20.281458Z",
"iopub.status.busy": "2023-08-18T07:02:20.280874Z",
"iopub.status.idle": "2023-08-18T07:02:20.287391Z",
"shell.execute_reply": "2023-08-18T07:02:20.286578Z"
},
"origin_pos": 12,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[[[ 0., 1., 2., 3.],\n",
" [ 4., 5., 6., 7.],\n",
" [ 8., 9., 10., 11.],\n",
" [12., 13., 14., 15.]]]])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"X = torch.arange(16, dtype=torch.float32).reshape((1, 1, 4, 4))\n",
"X"
]
},
{
"cell_type": "markdown",
"id": "f95f2492",
"metadata": {
"origin_pos": 15
},
"source": [
"默认情况下,(**深度学习框架中的步幅与汇聚窗口的大小相同**)。\n",
"因此,如果我们使用形状为`(3, 3)`的汇聚窗口,那么默认情况下,我们得到的步幅形状为`(3, 3)`。\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "a3cc01e3",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:02:20.291052Z",
"iopub.status.busy": "2023-08-18T07:02:20.290402Z",
"iopub.status.idle": "2023-08-18T07:02:20.296276Z",
"shell.execute_reply": "2023-08-18T07:02:20.295476Z"
},
"origin_pos": 17,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[[[10.]]]])"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pool2d = nn.MaxPool2d(3)\n",
"pool2d(X)"
]
},
{
"cell_type": "markdown",
"id": "0b19d625",
"metadata": {
"origin_pos": 20
},
"source": [
"[**填充和步幅可以手动设定**]。\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "9c247428",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:02:20.299965Z",
"iopub.status.busy": "2023-08-18T07:02:20.299310Z",
"iopub.status.idle": "2023-08-18T07:02:20.307455Z",
"shell.execute_reply": "2023-08-18T07:02:20.306477Z"
},
"origin_pos": 22,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[[[ 5., 7.],\n",
" [13., 15.]]]])"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pool2d = nn.MaxPool2d(3, padding=1, stride=2)\n",
"pool2d(X)"
]
},
{
"cell_type": "markdown",
"id": "635b4034",
"metadata": {
"origin_pos": 26,
"tab": [
"pytorch"
]
},
"source": [
"当然,我们可以(**设定一个任意大小的矩形汇聚窗口,并分别设定填充和步幅的高度和宽度**)。\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "7c169b2f",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:02:20.311794Z",
"iopub.status.busy": "2023-08-18T07:02:20.311492Z",
"iopub.status.idle": "2023-08-18T07:02:20.320399Z",
"shell.execute_reply": "2023-08-18T07:02:20.319108Z"
},
"origin_pos": 30,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[[[ 5., 7.],\n",
" [13., 15.]]]])"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pool2d = nn.MaxPool2d((2, 3), stride=(2, 3), padding=(0, 1))\n",
"pool2d(X)"
]
},
{
"cell_type": "markdown",
"id": "a893596a",
"metadata": {
"origin_pos": 33
},
"source": [
"## 多个通道\n",
"\n",
"在处理多通道输入数据时,[**汇聚层在每个输入通道上单独运算**],而不是像卷积层一样在通道上对输入进行汇总。\n",
"这意味着汇聚层的输出通道数与输入通道数相同。\n",
"下面,我们将在通道维度上连结张量`X`和`X + 1`,以构建具有2个通道的输入。\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "c0a30a7f",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:02:20.325617Z",
"iopub.status.busy": "2023-08-18T07:02:20.324879Z",
"iopub.status.idle": "2023-08-18T07:02:20.335303Z",
"shell.execute_reply": "2023-08-18T07:02:20.334055Z"
},
"origin_pos": 35,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[[[ 0., 1., 2., 3.],\n",
" [ 4., 5., 6., 7.],\n",
" [ 8., 9., 10., 11.],\n",
" [12., 13., 14., 15.]],\n",
"\n",
" [[ 1., 2., 3., 4.],\n",
" [ 5., 6., 7., 8.],\n",
" [ 9., 10., 11., 12.],\n",
" [13., 14., 15., 16.]]]])"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"X = torch.cat((X, X + 1), 1)\n",
"X"
]
},
{
"cell_type": "markdown",
"id": "45add004",
"metadata": {
"origin_pos": 37
},
"source": [
"如下所示,汇聚后输出通道的数量仍然是2。\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "e534c8f3",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:02:20.340529Z",
"iopub.status.busy": "2023-08-18T07:02:20.339767Z",
"iopub.status.idle": "2023-08-18T07:02:20.349365Z",
"shell.execute_reply": "2023-08-18T07:02:20.348159Z"
},
"origin_pos": 39,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[[[ 5., 7.],\n",
" [13., 15.]],\n",
"\n",
" [[ 6., 8.],\n",
" [14., 16.]]]])"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pool2d = nn.MaxPool2d(3, padding=1, stride=2)\n",
"pool2d(X)"
]
},
{
"cell_type": "markdown",
"id": "0a91fd9f",
"metadata": {
"origin_pos": 43
},
"source": [
"## 小结\n",
"\n",
"* 对于给定输入元素,最大汇聚层会输出该窗口内的最大值,平均汇聚层会输出该窗口内的平均值。\n",
"* 汇聚层的主要优点之一是减轻卷积层对位置的过度敏感。\n",
"* 我们可以指定汇聚层的填充和步幅。\n",
"* 使用最大汇聚层以及大于1的步幅,可减少空间维度(如高度和宽度)。\n",
"* 汇聚层的输出通道数与输入通道数相同。\n",
"\n",
"## 练习\n",
"\n",
"1. 尝试将平均汇聚层作为卷积层的特殊情况实现。\n",
"1. 尝试将最大汇聚层作为卷积层的特殊情况实现。\n",
    "1. 假设汇聚层的输入大小为$c\\times h\\times w$,汇聚窗口的形状为$p_h\\times p_w$,填充为$(p_h, p_w)$,步幅为$(s_h, s_w)$。这个汇聚层的计算成本是多少?\n",
"1. 为什么最大汇聚层和平均汇聚层的工作方式不同?\n",
"1. 我们是否需要最小汇聚层?可以用已知函数替换它吗?\n",
"1. 除了平均汇聚层和最大汇聚层,是否有其它函数可以考虑(提示:回想一下`softmax`)?为什么它不流行?\n"
]
},
{
"cell_type": "markdown",
"id": "f53a8320",
"metadata": {
"origin_pos": 45,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/1857)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,172 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "36224718",
"metadata": {
"origin_pos": 0
},
"source": [
"# 从全连接层到卷积\n",
":label:`sec_why-conv`\n",
"\n",
"我们之前讨论的多层感知机十分适合处理表格数据,其中行对应样本,列对应特征。\n",
"对于表格数据,我们寻找的模式可能涉及特征之间的交互,但是我们不能预先假设任何与特征交互相关的先验结构。\n",
"此时,多层感知机可能是最好的选择,然而对于高维感知数据,这种缺少结构的网络可能会变得不实用。\n",
"\n",
"例如,在之前猫狗分类的例子中:假设我们有一个足够充分的照片数据集,数据集中是拥有标注的照片,每张照片具有百万级像素,这意味着网络的每次输入都有一百万个维度。\n",
"即使将隐藏层维度降低到1000,这个全连接层也将有$10^6 \\times 10^3 = 10^9$个参数。\n",
    "想要训练这个模型几乎不可实现,因为这需要大量的GPU、分布式优化训练的经验和超乎常人的耐心。\n",
"\n",
"有些读者可能会反对这个观点,认为要求百万像素的分辨率可能不是必要的。\n",
"然而,即使分辨率减小为十万像素,使用1000个隐藏单元的隐藏层也可能不足以学习到良好的图像特征,在真实的系统中我们仍然需要数十亿个参数。\n",
"此外,拟合如此多的参数还需要收集大量的数据。\n",
"然而,如今人类和机器都能很好地区分猫和狗:这是因为图像中本就拥有丰富的结构,而这些结构可以被人类和机器学习模型使用。\n",
    "*卷积神经网络*(convolutional neural networks,CNN)是机器学习利用自然图像中一些已知结构的创造性方法。\n",
"\n",
"## 不变性\n",
"\n",
"想象一下,假设我们想从一张图片中找到某个物体。\n",
"合理的假设是:无论哪种方法找到这个物体,都应该和物体的位置无关。\n",
"理想情况下,我们的系统应该能够利用常识:猪通常不在天上飞,飞机通常不在水里游泳。\n",
"但是,如果一只猪出现在图片顶部,我们还是应该认出它。\n",
    "我们可以从儿童游戏“沃尔多在哪里”( :numref:`img_waldo`)中得到灵感:\n",
"在这个游戏中包含了许多充斥着活动的混乱场景,而沃尔多通常潜伏在一些不太可能的位置,读者的目标就是找出他。\n",
"尽管沃尔多的装扮很有特点,但是在眼花缭乱的场景中找到他也如大海捞针。\n",
"然而沃尔多的样子并不取决于他潜藏的地方,因此我们可以使用一个“沃尔多检测器”扫描图像。\n",
"该检测器将图像分割成多个区域,并为每个区域包含沃尔多的可能性打分。\n",
    "卷积神经网络正是将*空间不变性*(spatial invariance)这一概念系统化,从而基于这个模型使用较少的参数来学习有用的表示。\n",
"\n",
"![沃尔多游戏示例图。](../img/where-wally-walker-books.jpg)\n",
":width:`400px`\n",
":label:`img_waldo`\n",
"\n",
"现在,我们将上述想法总结一下,从而帮助我们设计适合于计算机视觉的神经网络架构。\n",
"\n",
    "1. *平移不变性*(translation invariance):不管检测对象出现在图像中的哪个位置,神经网络的前面几层应该对相同的图像区域具有相似的反应,即为“平移不变性”。\n",
    "1. *局部性*(locality):神经网络的前面几层应该只探索输入图像中的局部区域,而不过度在意图像中相隔较远区域的关系,这就是“局部性”原则。最终,可以聚合这些局部特征,以在整个图像级别进行预测。\n",
"\n",
"让我们看看这些原则是如何转化为数学表示的。\n",
"\n",
"## 多层感知机的限制\n",
"\n",
"首先,多层感知机的输入是二维图像$\\mathbf{X}$,其隐藏表示$\\mathbf{H}$在数学上是一个矩阵,在代码中表示为二维张量。\n",
"其中$\\mathbf{X}$和$\\mathbf{H}$具有相同的形状。\n",
"为了方便理解,我们可以认为,无论是输入还是隐藏表示都拥有空间结构。\n",
"\n",
"使用$[\\mathbf{X}]_{i, j}$和$[\\mathbf{H}]_{i, j}$分别表示输入图像和隐藏表示中位置($i$,$j$)处的像素。\n",
"为了使每个隐藏神经元都能接收到每个输入像素的信息,我们将参数从权重矩阵(如同我们先前在多层感知机中所做的那样)替换为四阶权重张量$\\mathsf{W}$。假设$\\mathbf{U}$包含偏置参数,我们可以将全连接层形式化地表示为\n",
"\n",
"$$\\begin{aligned} \\left[\\mathbf{H}\\right]_{i, j} &= [\\mathbf{U}]_{i, j} + \\sum_k \\sum_l[\\mathsf{W}]_{i, j, k, l} [\\mathbf{X}]_{k, l}\\\\ &= [\\mathbf{U}]_{i, j} +\n",
"\\sum_a \\sum_b [\\mathsf{V}]_{i, j, a, b} [\\mathbf{X}]_{i+a, j+b}.\\end{aligned}$$\n",
"\n",
"其中,从$\\mathsf{W}$到$\\mathsf{V}$的转换只是形式上的转换,因为在这两个四阶张量的元素之间存在一一对应的关系。\n",
"我们只需重新索引下标$(k, l)$,使$k = i+a$、$l = j+b$,由此可得$[\\mathsf{V}]_{i, j, a, b} = [\\mathsf{W}]_{i, j, i+a, j+b}$。\n",
"索引$a$和$b$通过在正偏移和负偏移之间移动覆盖了整个图像。\n",
"对于隐藏表示中任意给定位置($i$,$j$)处的像素值$[\\mathbf{H}]_{i, j}$,可以通过在$x$中以$(i, j)$为中心对像素进行加权求和得到,加权使用的权重为$[\\mathsf{V}]_{i, j, a, b}$。\n",
"\n",
"### 平移不变性\n",
"\n",
"现在引用上述的第一个原则:平移不变性。\n",
"这意味着检测对象在输入$\\mathbf{X}$中的平移,应该仅导致隐藏表示$\\mathbf{H}$中的平移。也就是说,$\\mathsf{V}$和$\\mathbf{U}$实际上不依赖于$(i, j)$的值,即$[\\mathsf{V}]_{i, j, a, b} = [\\mathbf{V}]_{a, b}$。并且$\\mathbf{U}$是一个常数,比如$u$。因此,我们可以简化$\\mathbf{H}$定义为:\n",
"\n",
"$$[\\mathbf{H}]_{i, j} = u + \\sum_a\\sum_b [\\mathbf{V}]_{a, b} [\\mathbf{X}]_{i+a, j+b}.$$\n",
"\n",
    "这就是*卷积*(convolution)。我们是在使用系数$[\\mathbf{V}]_{a, b}$对位置$(i, j)$附近的像素$(i+a, j+b)$进行加权得到$[\\mathbf{H}]_{i, j}$。\n",
"注意,$[\\mathbf{V}]_{a, b}$的系数比$[\\mathsf{V}]_{i, j, a, b}$少很多,因为前者不再依赖于图像中的位置。这就是显著的进步!\n",
"\n",
"### 局部性\n",
"\n",
"现在引用上述的第二个原则:局部性。如上所述,为了收集用来训练参数$[\\mathbf{H}]_{i, j}$的相关信息,我们不应偏离到距$(i, j)$很远的地方。这意味着在$|a|> \\Delta$或$|b| > \\Delta$的范围之外,我们可以设置$[\\mathbf{V}]_{a, b} = 0$。因此,我们可以将$[\\mathbf{H}]_{i, j}$重写为\n",
"\n",
"$$[\\mathbf{H}]_{i, j} = u + \\sum_{a = -\\Delta}^{\\Delta} \\sum_{b = -\\Delta}^{\\Delta} [\\mathbf{V}]_{a, b} [\\mathbf{X}]_{i+a, j+b}.$$\n",
":eqlabel:`eq_conv-layer`\n",
"\n",
    "简而言之, :eqref:`eq_conv-layer`是一个*卷积层*(convolutional layer),而卷积神经网络是包含卷积层的一类特殊的神经网络。\n",
    "在深度学习研究社区中,$\\mathbf{V}$被称为*卷积核*(convolution kernel)或者*滤波器*(filter),抑或简单地称之为该卷积层的*权重*,通常该权重是可学习的参数。\n",
"当图像处理的局部区域很小时,卷积神经网络与多层感知机的训练差异可能是巨大的:以前,多层感知机可能需要数十亿个参数来表示网络中的一层,而现在卷积神经网络通常只需要几百个参数,而且不需要改变输入或隐藏表示的维数。\n",
"参数大幅减少的代价是,我们的特征现在是平移不变的,并且当确定每个隐藏活性值时,每一层只包含局部的信息。\n",
    "以上所有的权重学习都将依赖于归纳偏置。当这种偏置与现实相符时,我们就能得到样本效率高的模型,并且这些模型能很好地泛化到未知数据中。\n",
    "但如果这种偏置与现实不符,比如当图像不满足平移不变性时,我们的模型可能难以拟合我们的训练数据。\n",
"\n",
"## 卷积\n",
"\n",
"在进一步讨论之前,我们先简要回顾一下为什么上面的操作被称为卷积。在数学中,两个函数(比如$f, g: \\mathbb{R}^d \\to \\mathbb{R}$)之间的“卷积”被定义为\n",
"\n",
"$$(f * g)(\\mathbf{x}) = \\int f(\\mathbf{z}) g(\\mathbf{x}-\\mathbf{z}) d\\mathbf{z}.$$\n",
"\n",
"也就是说,卷积是当把一个函数“翻转”并移位$\\mathbf{x}$时,测量$f$和$g$之间的重叠。\n",
    "当为离散对象时,积分就变成求和。例如,对于定义在整数索引集$\\mathbb{Z}$上、平方可和的无限维向量,我们得到以下定义:\n",
"\n",
"$$(f * g)(i) = \\sum_a f(a) g(i-a).$$\n",
"\n",
"对于二维张量,则为$f$的索引$(a, b)$和$g$的索引$(i-a, j-b)$上的对应加和:\n",
"\n",
"$$(f * g)(i, j) = \\sum_a\\sum_b f(a, b) g(i-a, j-b).$$\n",
":eqlabel:`eq_2d-conv-discrete`\n",
"\n",
    "这看起来类似于 :eqref:`eq_conv-layer`,但有一个主要区别:这里不是使用$(i+a, j+b)$,而是使用差值。然而,这种区别是表面的,因为我们总是可以匹配 :eqref:`eq_conv-layer`和 :eqref:`eq_2d-conv-discrete`之间的符号。我们在 :eqref:`eq_conv-layer`中的原始定义更正确地描述了*互相关*(cross-correlation),这个问题将在下一节中讨论。\n",
"\n",
"## “沃尔多在哪里”回顾\n",
"\n",
"回到上面的“沃尔多在哪里”游戏,让我们看看它到底是什么样子。卷积层根据滤波器$\\mathbf{V}$选取给定大小的窗口,并加权处理图片,如 :numref:`fig_waldo_mask`中所示。我们的目标是学习一个模型,以便探测出在“沃尔多”最可能出现的地方。\n",
"\n",
"![发现沃尔多。](../img/waldo-mask.jpg)\n",
":width:`400px`\n",
":label:`fig_waldo_mask`\n",
"\n",
"### 通道\n",
":label:`subsec_why-conv-channels`\n",
"\n",
"然而这种方法有一个问题:我们忽略了图像一般包含三个通道/三种原色(红色、绿色和蓝色)。\n",
"实际上,图像不是二维张量,而是一个由高度、宽度和颜色组成的三维张量,比如包含$1024 \\times 1024 \\times 3$个像素。\n",
"前两个轴与像素的空间位置有关,而第三个轴可以看作每个像素的多维表示。\n",
"因此,我们将$\\mathsf{X}$索引为$[\\mathsf{X}]_{i, j, k}$。由此卷积相应地调整为$[\\mathsf{V}]_{a,b,c}$,而不是$[\\mathbf{V}]_{a,b}$。\n",
"\n",
"此外,由于输入图像是三维的,我们的隐藏表示$\\mathsf{H}$也最好采用三维张量。\n",
"换句话说,对于每一个空间位置,我们想要采用一组而不是一个隐藏表示。这样一组隐藏表示可以想象成一些互相堆叠的二维网格。\n",
"因此,我们可以把隐藏表示想象为一系列具有二维张量的*通道*(channel)。\n",
"这些通道有时也被称为*特征映射*(feature maps),因为每个通道都向后续层提供一组空间化的学习特征。\n",
"直观上可以想象在靠近输入的底层,一些通道专门识别边缘,而一些通道专门识别纹理。\n",
"\n",
"为了支持输入$\\mathsf{X}$和隐藏表示$\\mathsf{H}$中的多个通道,我们可以在$\\mathsf{V}$中添加第四个坐标,即$[\\mathsf{V}]_{a, b, c, d}$。综上所述,\n",
"\n",
"$$[\\mathsf{H}]_{i,j,d} = \\sum_{a = -\\Delta}^{\\Delta} \\sum_{b = -\\Delta}^{\\Delta} \\sum_c [\\mathsf{V}]_{a, b, c, d} [\\mathsf{X}]_{i+a, j+b, c},$$\n",
":eqlabel:`eq_conv-layer-channels`\n",
"\n",
"其中隐藏表示$\\mathsf{H}$中的索引$d$表示输出通道,而随后的输出将继续以三维张量$\\mathsf{H}$作为输入进入下一个卷积层。\n",
"所以, :eqref:`eq_conv-layer-channels`可以定义具有多个通道的卷积层,而其中$\\mathsf{V}$是该卷积层的权重。\n",
"\n",
"然而,仍有许多问题亟待解决。\n",
"例如,图像中是否到处都有存在沃尔多的可能?如何有效地计算输出层?如何选择适当的激活函数?为了训练有效的网络,如何做出合理的网络设计选择?我们将在本章的其它部分讨论这些问题。\n",
"\n",
"## 小结\n",
"\n",
"- 图像的平移不变性使我们以相同的方式处理局部图像,而不在乎它的位置。\n",
"- 局部性意味着计算相应的隐藏表示只需一小部分局部图像像素。\n",
"- 在图像处理中,卷积层通常比全连接层需要更少的参数,但依旧获得高效用的模型。\n",
"- 卷积神经网络(CNN)是一类特殊的神经网络,它可以包含多个卷积层。\n",
"- 多个输入和输出通道使模型在每个空间位置可以获取图像的多方面特征。\n",
"\n",
"## 练习\n",
"\n",
"1. 假设卷积层 :eqref:`eq_conv-layer`覆盖的局部区域$\\Delta = 0$。在这种情况下,证明卷积内核为每组通道独立地实现一个全连接层。\n",
"1. 为什么平移不变性可能也不是好主意呢?\n",
"1. 当从图像边界像素获取隐藏表示时,我们需要思考哪些问题?\n",
"1. 描述一个类似的音频卷积层的架构。\n",
"1. 卷积层也适合于文本数据吗?为什么?\n",
"1. 证明在 :eqref:`eq_2d-conv-discrete`中,$f * g = g * f$。\n",
"\n",
"[Discussions](https://discuss.d2l.ai/t/5767)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,404 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "f66f7a20",
"metadata": {
"origin_pos": 0
},
"source": [
"# 自定义层\n",
"\n",
"深度学习成功背后的一个因素是神经网络的灵活性:\n",
"我们可以用创造性的方式组合不同的层,从而设计出适用于各种任务的架构。\n",
"例如,研究人员发明了专门用于处理图像、文本、序列数据和执行动态规划的层。\n",
"有时我们会遇到或要自己发明一个现在在深度学习框架中还不存在的层。\n",
"在这些情况下,必须构建自定义层。本节将展示如何构建自定义层。\n",
"\n",
"## 不带参数的层\n",
"\n",
"首先,我们(**构造一个没有任何参数的自定义层**)。\n",
"回忆一下在 :numref:`sec_model_construction`对块的介绍,\n",
"这应该看起来很眼熟。\n",
"下面的`CenteredLayer`类要从其输入中减去均值。\n",
"要构建它,我们只需继承基础层类并实现前向传播功能。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "cc3b353a",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:16.604374Z",
"iopub.status.busy": "2023-08-18T07:07:16.603752Z",
"iopub.status.idle": "2023-08-18T07:07:17.492480Z",
"shell.execute_reply": "2023-08-18T07:07:17.491482Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import torch\n",
"import torch.nn.functional as F\n",
"from torch import nn\n",
"\n",
"\n",
"class CenteredLayer(nn.Module):\n",
" def __init__(self):\n",
" super().__init__()\n",
"\n",
" def forward(self, X):\n",
" return X - X.mean()"
]
},
{
"cell_type": "markdown",
"id": "a3c321cf",
"metadata": {
"origin_pos": 5
},
"source": [
"让我们向该层提供一些数据,验证它是否能按预期工作。\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "dec68045",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:17.497408Z",
"iopub.status.busy": "2023-08-18T07:07:17.497077Z",
"iopub.status.idle": "2023-08-18T07:07:17.508357Z",
"shell.execute_reply": "2023-08-18T07:07:17.507175Z"
},
"origin_pos": 7,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([-2., -1., 0., 1., 2.])"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"layer = CenteredLayer()\n",
"layer(torch.FloatTensor([1, 2, 3, 4, 5]))"
]
},
{
"cell_type": "markdown",
"id": "9d38600d",
"metadata": {
"origin_pos": 10
},
"source": [
"现在,我们可以[**将层作为组件合并到更复杂的模型中**]。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "1b903c3c",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:17.513247Z",
"iopub.status.busy": "2023-08-18T07:07:17.512547Z",
"iopub.status.idle": "2023-08-18T07:07:17.518968Z",
"shell.execute_reply": "2023-08-18T07:07:17.517886Z"
},
"origin_pos": 12,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"net = nn.Sequential(nn.Linear(8, 128), CenteredLayer())"
]
},
{
"cell_type": "markdown",
"id": "4c48076d",
"metadata": {
"origin_pos": 14
},
"source": [
"作为额外的健全性检查,我们可以在向该网络发送随机数据后,检查均值是否为0。\n",
    "由于我们处理的是浮点数,出于存储精度的原因,我们仍然可能会看到一个非常小的非零数。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "6ab302a0",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:17.523517Z",
"iopub.status.busy": "2023-08-18T07:07:17.523140Z",
"iopub.status.idle": "2023-08-18T07:07:17.534718Z",
"shell.execute_reply": "2023-08-18T07:07:17.533593Z"
},
"origin_pos": 16,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor(7.4506e-09, grad_fn=<MeanBackward0>)"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Y = net(torch.rand(4, 8))\n",
"Y.mean()"
]
},
{
"cell_type": "markdown",
"id": "ca107571",
"metadata": {
"origin_pos": 19
},
"source": [
"## [**带参数的层**]\n",
"\n",
"以上我们知道了如何定义简单的层,下面我们继续定义具有参数的层,\n",
"这些参数可以通过训练进行调整。\n",
"我们可以使用内置函数来创建参数,这些函数提供一些基本的管理功能。\n",
"比如管理访问、初始化、共享、保存和加载模型参数。\n",
"这样做的好处之一是:我们不需要为每个自定义层编写自定义的序列化程序。\n",
"\n",
"现在,让我们实现自定义版本的全连接层。\n",
"回想一下,该层需要两个参数,一个用于表示权重,另一个用于表示偏置项。\n",
"在此实现中,我们使用修正线性单元作为激活函数。\n",
"该层需要输入参数:`in_units`和`units`,分别表示输入数和输出数。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "8c4a7999",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:17.539101Z",
"iopub.status.busy": "2023-08-18T07:07:17.538729Z",
"iopub.status.idle": "2023-08-18T07:07:17.546162Z",
"shell.execute_reply": "2023-08-18T07:07:17.545105Z"
},
"origin_pos": 21,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"class MyLinear(nn.Module):\n",
" def __init__(self, in_units, units):\n",
" super().__init__()\n",
" self.weight = nn.Parameter(torch.randn(in_units, units))\n",
" self.bias = nn.Parameter(torch.randn(units,))\n",
" def forward(self, X):\n",
" linear = torch.matmul(X, self.weight.data) + self.bias.data\n",
" return F.relu(linear)"
]
},
{
"cell_type": "markdown",
"id": "442183c6",
"metadata": {
"origin_pos": 25,
"tab": [
"pytorch"
]
},
"source": [
"接下来,我们实例化`MyLinear`类并访问其模型参数。\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "4490005a",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:17.550522Z",
"iopub.status.busy": "2023-08-18T07:07:17.550152Z",
"iopub.status.idle": "2023-08-18T07:07:17.558364Z",
"shell.execute_reply": "2023-08-18T07:07:17.557338Z"
},
"origin_pos": 28,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"Parameter containing:\n",
"tensor([[ 0.1775, -1.4539, 0.3972],\n",
" [-0.1339, 0.5273, 1.3041],\n",
" [-0.3327, -0.2337, -0.6334],\n",
" [ 1.2076, -0.3937, 0.6851],\n",
" [-0.4716, 0.0894, -0.9195]], requires_grad=True)"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"linear = MyLinear(5, 3)\n",
"linear.weight"
]
},
{
"cell_type": "markdown",
"id": "7dcc8fd9",
"metadata": {
"origin_pos": 30
},
"source": [
"我们可以[**使用自定义层直接执行前向传播计算**]。\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "25f2aabf",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:17.562706Z",
"iopub.status.busy": "2023-08-18T07:07:17.562337Z",
"iopub.status.idle": "2023-08-18T07:07:17.570015Z",
"shell.execute_reply": "2023-08-18T07:07:17.568916Z"
},
"origin_pos": 32,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[0., 0., 0.],\n",
" [0., 0., 0.]])"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"linear(torch.rand(2, 5))"
]
},
{
"cell_type": "markdown",
"id": "c92ac1e0",
"metadata": {
"origin_pos": 35
},
"source": [
"我们还可以(**使用自定义层构建模型**),就像使用内置的全连接层一样使用自定义层。\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "fb2953e8",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:17.574378Z",
"iopub.status.busy": "2023-08-18T07:07:17.574000Z",
"iopub.status.idle": "2023-08-18T07:07:17.582792Z",
"shell.execute_reply": "2023-08-18T07:07:17.581735Z"
},
"origin_pos": 37,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[0.],\n",
" [0.]])"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"net = nn.Sequential(MyLinear(64, 8), MyLinear(8, 1))\n",
"net(torch.rand(2, 64))"
]
},
{
"cell_type": "markdown",
"id": "5a23d1ab",
"metadata": {
"origin_pos": 40
},
"source": [
"## 小结\n",
"\n",
"* 我们可以通过基本层类设计自定义层。这允许我们定义灵活的新层,其行为与深度学习框架中的任何现有层不同。\n",
"* 在自定义层定义完成后,我们就可以在任意环境和网络架构中调用该自定义层。\n",
"* 层可以有局部参数,这些参数可以通过内置函数创建。\n",
"\n",
"## 练习\n",
"\n",
"1. 设计一个接受输入并计算张量降维的层,它返回$y_k = \\sum_{i, j} W_{ijk} x_i x_j$。\n",
"1. 设计一个返回输入数据的傅立叶系数前半部分的层。\n"
]
},
{
"cell_type": "markdown",
"id": "2d5d22c2",
"metadata": {
"origin_pos": 42,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/1835)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,103 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "59a11c8e",
"metadata": {
"origin_pos": 0
},
"source": [
"# 延后初始化\n",
":label:`sec_deferred_init`\n",
"\n",
"到目前为止,我们忽略了建立网络时需要做的以下这些事情:\n",
"\n",
"* 我们定义了网络架构,但没有指定输入维度。\n",
"* 我们添加层时没有指定前一层的输出维度。\n",
"* 我们在初始化参数时,甚至没有足够的信息来确定模型应该包含多少参数。\n",
"\n",
"有些读者可能会对我们的代码能运行感到惊讶。\n",
"毕竟,深度学习框架无法判断网络的输入维度是什么。\n",
    "这里的诀窍是框架的*延后初始化*(defers initialization),\n",
"即直到数据第一次通过模型传递时,框架才会动态地推断出每个层的大小。\n",
"\n",
"在以后,当使用卷积神经网络时,\n",
"由于输入维度(即图像的分辨率)将影响每个后续层的维数,\n",
"有了该技术将更加方便。\n",
"现在我们在编写代码时无须知道维度是什么就可以设置参数,\n",
"这种能力可以大大简化定义和修改模型的任务。\n",
"接下来,我们将更深入地研究初始化机制。\n",
"\n",
"## 实例化网络\n",
"\n",
"首先,让我们实例化一个多层感知机。\n"
]
},
{
"cell_type": "markdown",
"id": "1d75086b",
"metadata": {
"origin_pos": 3
},
"source": [
"此时,因为输入维数是未知的,所以网络不可能知道输入层权重的维数。\n",
"因此,框架尚未初始化任何参数,我们通过尝试访问以下参数进行确认。\n"
]
},
{
"cell_type": "markdown",
"id": "82b701e3",
"metadata": {
"origin_pos": 10
},
"source": [
"接下来让我们将数据通过网络,最终使框架初始化参数。\n"
]
},
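{
 "cell_type": "markdown",
 "metadata": {},
 "source": [
  "以PyTorch为例,上述流程可以用`nn.LazyLinear`来演示(这里的网络结构仅为示意):第一次前向传播之前,权重是未初始化的占位参数;数据通过后,形状才被推断出来。\n",
  "\n",

```python
import torch
from torch import nn

# 各层都不指定输入维度
net = nn.Sequential(nn.LazyLinear(256), nn.ReLU(), nn.LazyLinear(10))
print(net[0].weight)        # <UninitializedParameter>:形状尚不可知

X = torch.rand(2, 20)
net(X)                      # 第一次前向传播触发参数初始化
print(net[0].weight.shape)  # torch.Size([256, 20])
print(net[2].weight.shape)  # torch.Size([10, 256])
```

 ]
},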
{
"cell_type": "markdown",
"id": "094382a3",
"metadata": {
"origin_pos": 13
},
"source": [
"一旦我们知道输入维数是20,框架可以通过代入值20来识别第一层权重矩阵的形状。\n",
"识别出第一层的形状后,框架处理第二层,依此类推,直到所有形状都已知为止。\n",
"注意,在这种情况下,只有第一层需要延迟初始化,但是框架仍是按顺序初始化的。\n",
"等到知道了所有的参数形状,框架就可以初始化参数。\n",
"\n",
"## 小结\n",
"\n",
"* 延后初始化使框架能够自动推断参数形状,使修改模型架构变得容易,避免了一些常见的错误。\n",
"* 我们可以通过模型传递数据,使框架最终初始化参数。\n",
"\n",
"## 练习\n",
"\n",
"1. 如果指定了第一层的输入尺寸,但没有指定后续层的尺寸,会发生什么?是否立即进行初始化?\n",
"1. 如果指定了不匹配的维度会发生什么?\n",
"1. 如果输入具有不同的维度,需要做什么?提示:查看参数绑定的相关内容。\n"
]
},
{
"cell_type": "markdown",
"id": "7ed4b454",
"metadata": {
"origin_pos": 15,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/5770)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,55 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "5cf9fbc8",
"metadata": {
"origin_pos": 0
},
"source": [
"# 深度学习计算\n",
":label:`chap_computation`\n",
"\n",
"除了庞大的数据集和强大的硬件,\n",
"优秀的软件工具在深度学习的快速发展中发挥了不可或缺的作用。\n",
"从2007年发布的开创性的Theano库开始,\n",
"灵活的开源工具使研究人员能够快速开发模型原型,\n",
"避免了我们使用标准组件时的重复工作,\n",
"同时仍然保持了我们进行底层修改的能力。\n",
"随着时间的推移,深度学习库已经演变成提供越来越粗糙的抽象。\n",
"就像半导体设计师从指定晶体管到逻辑电路再到编写代码一样,\n",
"神经网络研究人员已经从考虑单个人工神经元的行为转变为从层的角度构思网络,\n",
"通常在设计架构时考虑的是更粗糙的块(block)。\n",
"\n",
"之前我们已经介绍了一些基本的机器学习概念,\n",
"并慢慢介绍了功能齐全的深度学习模型。\n",
"在上一章中,我们从零开始实现了多层感知机的每个组件,\n",
"然后展示了如何利用高级API轻松地实现相同的模型。\n",
"为了易于学习,我们调用了深度学习库,但是跳过了它们工作的细节。\n",
"在本章中,我们将深入探索深度学习计算的关键组件,\n",
"即模型构建、参数访问与初始化、设计自定义层和块、将模型读写到磁盘,\n",
"以及利用GPU实现显著的加速。\n",
"这些知识将使读者从深度学习“基础用户”变为“高级用户”。\n",
"虽然本章不介绍任何新的模型或数据集,\n",
"但后面的高级模型章节在很大程度上依赖于本章的知识。\n",
"\n",
":begin_tab:toc\n",
" - [model-construction](model-construction.ipynb)\n",
" - [parameters](parameters.ipynb)\n",
" - [deferred-init](deferred-init.ipynb)\n",
" - [custom-layer](custom-layer.ipynb)\n",
" - [read-write](read-write.ipynb)\n",
" - [use-gpu](use-gpu.ipynb)\n",
":end_tab:\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,646 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "1dca9252",
"metadata": {
"origin_pos": 0
},
"source": [
"# 层和块\n",
":label:`sec_model_construction`\n",
"\n",
"之前首次介绍神经网络时,我们关注的是具有单一输出的线性模型。\n",
"在这里,整个模型只有一个输出。\n",
"注意,单个神经网络\n",
"1)接受一些输入;\n",
"2)生成相应的标量输出;\n",
"3)具有一组相关 *参数*(parameters),更新这些参数可以优化某目标函数。\n",
"\n",
"然后,当考虑具有多个输出的网络时,\n",
"我们利用矢量化算法来描述整层神经元。\n",
    "像单个神经元一样,层(1)接受一组输入,\n",
    "(2)生成相应的输出,\n",
    "(3)由一组可调整参数描述。\n",
"当我们使用softmax回归时,一个单层本身就是模型。\n",
"然而,即使我们随后引入了多层感知机,我们仍然可以认为该模型保留了上面所说的基本架构。\n",
"\n",
"对于多层感知机而言,整个模型及其组成层都是这种架构。\n",
"整个模型接受原始输入(特征),生成输出(预测),\n",
"并包含一些参数(所有组成层的参数集合)。\n",
"同样,每个单独的层接收输入(由前一层提供),\n",
"生成输出(到下一层的输入),并且具有一组可调参数,\n",
"这些参数根据从下一层反向传播的信号进行更新。\n",
"\n",
"事实证明,研究讨论“比单个层大”但“比整个模型小”的组件更有价值。\n",
"例如,在计算机视觉中广泛流行的ResNet-152架构就有数百层,\n",
    "这些层是由*层组*(groups of layers)的重复模式组成。\n",
"这个ResNet架构赢得了2015年ImageNet和COCO计算机视觉比赛\n",
"的识别和检测任务 :cite:`He.Zhang.Ren.ea.2016`。\n",
"目前ResNet架构仍然是许多视觉任务的首选架构。\n",
"在其他的领域,如自然语言处理和语音,\n",
"层组以各种重复模式排列的类似架构现在也是普遍存在。\n",
"\n",
"为了实现这些复杂的网络,我们引入了神经网络*块*的概念。\n",
"*块*(block)可以描述单个层、由多个层组成的组件或整个模型本身。\n",
"使用块进行抽象的一个好处是可以将一些块组合成更大的组件,\n",
"这一过程通常是递归的,如 :numref:`fig_blocks`所示。\n",
"通过定义代码来按需生成任意复杂度的块,\n",
"我们可以通过简洁的代码实现复杂的神经网络。\n",
"\n",
"![多个层被组合成块,形成更大的模型](../img/blocks.svg)\n",
":label:`fig_blocks`\n",
"\n",
    "从编程的角度来看,块由*类*(class)表示。\n",
"它的任何子类都必须定义一个将其输入转换为输出的前向传播函数,\n",
"并且必须存储任何必需的参数。\n",
"注意,有些块不需要任何参数。\n",
"最后,为了计算梯度,块必须具有反向传播函数。\n",
"在定义我们自己的块时,由于自动微分(在 :numref:`sec_autograd` 中引入)\n",
"提供了一些后端实现,我们只需要考虑前向传播函数和必需的参数。\n",
"\n",
"在构造自定义块之前,(**我们先回顾一下多层感知机**)\n",
    "( :numref:`sec_mlp_concise` )的代码。\n",
"下面的代码生成一个网络,其中包含一个具有256个单元和ReLU激活函数的全连接隐藏层,\n",
"然后是一个具有10个隐藏单元且不带激活函数的全连接输出层。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9895e279",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:57:00.244437Z",
"iopub.status.busy": "2023-08-18T06:57:00.243813Z",
"iopub.status.idle": "2023-08-18T06:57:01.320999Z",
"shell.execute_reply": "2023-08-18T06:57:01.320186Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[ 0.0343, 0.0264, 0.2505, -0.0243, 0.0945, 0.0012, -0.0141, 0.0666,\n",
" -0.0547, -0.0667],\n",
" [ 0.0772, -0.0274, 0.2638, -0.0191, 0.0394, -0.0324, 0.0102, 0.0707,\n",
" -0.1481, -0.1031]], grad_fn=<AddmmBackward0>)"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import torch\n",
"from torch import nn\n",
"from torch.nn import functional as F\n",
"\n",
"net = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))\n",
"\n",
"X = torch.rand(2, 20)\n",
"net(X)"
]
},
{
"cell_type": "markdown",
"id": "be949c0e",
"metadata": {
"origin_pos": 6,
"tab": [
"pytorch"
]
},
"source": [
"在这个例子中,我们通过实例化`nn.Sequential`来构建我们的模型,\n",
"层的执行顺序是作为参数传递的。\n",
    "简而言之,(**`nn.Sequential`定义了一种特殊的`Module`**),\n",
"即在PyTorch中表示一个块的类,\n",
"它维护了一个由`Module`组成的有序列表。\n",
"注意,两个全连接层都是`Linear`类的实例,\n",
"`Linear`类本身就是`Module`的子类。\n",
"另外,到目前为止,我们一直在通过`net(X)`调用我们的模型来获得模型的输出。\n",
"这实际上是`net.__call__(X)`的简写。\n",
"这个前向传播函数非常简单:\n",
"它将列表中的每个块连接在一起,将每个块的输出作为下一个块的输入。\n"
]
},
{
"cell_type": "markdown",
"id": "a3ce5ce8",
"metadata": {
"origin_pos": 9
},
"source": [
"## [**自定义块**]\n",
"\n",
"要想直观地了解块是如何工作的,最简单的方法就是自己实现一个。\n",
"在实现我们自定义块之前,我们简要总结一下每个块必须提供的基本功能。\n"
]
},
{
"cell_type": "markdown",
"id": "24ea84f7",
"metadata": {
"origin_pos": 11,
"tab": [
"pytorch"
]
},
"source": [
"1. 将输入数据作为其前向传播函数的参数。\n",
"1. 通过前向传播函数来生成输出。请注意,输出的形状可能与输入的形状不同。例如,我们上面模型中的第一个全连接的层接收一个20维的输入,但是返回一个维度为256的输出。\n",
"1. 计算其输出关于输入的梯度,可通过其反向传播函数进行访问。通常这是自动发生的。\n",
"1. 存储和访问前向传播计算所需的参数。\n",
"1. 根据需要初始化模型参数。\n"
]
},
{
"cell_type": "markdown",
"id": "572894df",
"metadata": {
"origin_pos": 12
},
"source": [
"在下面的代码片段中,我们从零开始编写一个块。\n",
"它包含一个多层感知机,其具有256个隐藏单元的隐藏层和一个10维输出层。\n",
"注意,下面的`MLP`类继承了表示块的类。\n",
"我们的实现只需要提供我们自己的构造函数(Python中的`__init__`函数)和前向传播函数。\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "876df867",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:57:01.325541Z",
"iopub.status.busy": "2023-08-18T06:57:01.324828Z",
"iopub.status.idle": "2023-08-18T06:57:01.330411Z",
"shell.execute_reply": "2023-08-18T06:57:01.329591Z"
},
"origin_pos": 14,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"class MLP(nn.Module):\n",
" # 用模型参数声明层。这里,我们声明两个全连接的层\n",
" def __init__(self):\n",
" # 调用MLP的父类Module的构造函数来执行必要的初始化。\n",
" # 这样,在类实例化时也可以指定其他函数参数,例如模型参数params(稍后将介绍)\n",
" super().__init__()\n",
" self.hidden = nn.Linear(20, 256) # 隐藏层\n",
" self.out = nn.Linear(256, 10) # 输出层\n",
"\n",
" # 定义模型的前向传播,即如何根据输入X返回所需的模型输出\n",
" def forward(self, X):\n",
" # 注意,这里我们使用ReLU的函数版本,其在nn.functional模块中定义。\n",
" return self.out(F.relu(self.hidden(X)))"
]
},
{
"cell_type": "markdown",
"id": "8327a09c",
"metadata": {
"origin_pos": 17
},
"source": [
"我们首先看一下前向传播函数,它以`X`作为输入,\n",
"计算带有激活函数的隐藏表示,并输出其未规范化的输出值。\n",
"在这个`MLP`实现中,两个层都是实例变量。\n",
"要了解这为什么是合理的,可以想象实例化两个多层感知机(`net1`和`net2`),\n",
"并根据不同的数据对它们进行训练。\n",
"当然,我们希望它们学到两种不同的模型。\n",
"\n",
"接着我们[**实例化多层感知机的层,然后在每次调用前向传播函数时调用这些层**]。\n",
"注意一些关键细节:\n",
"首先,我们定制的`__init__`函数通过`super().__init__()`\n",
"调用父类的`__init__`函数,\n",
"省去了重复编写模版代码的痛苦。\n",
"然后,我们实例化两个全连接层,\n",
"分别为`self.hidden`和`self.out`。\n",
"注意,除非我们实现一个新的运算符,\n",
"否则我们不必担心反向传播函数或参数初始化,\n",
"系统将自动生成这些。\n",
"\n",
"我们来试一下这个函数:\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "f7a34ec3",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:57:01.334346Z",
"iopub.status.busy": "2023-08-18T06:57:01.333603Z",
"iopub.status.idle": "2023-08-18T06:57:01.340473Z",
"shell.execute_reply": "2023-08-18T06:57:01.339676Z"
},
"origin_pos": 19,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[ 0.0669, 0.2202, -0.0912, -0.0064, 0.1474, -0.0577, -0.3006, 0.1256,\n",
" -0.0280, 0.4040],\n",
" [ 0.0545, 0.2591, -0.0297, 0.1141, 0.1887, 0.0094, -0.2686, 0.0732,\n",
" -0.0135, 0.3865]], grad_fn=<AddmmBackward0>)"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"net = MLP()\n",
"net(X)"
]
},
{
"cell_type": "markdown",
"id": "37aaa7fc",
"metadata": {
"origin_pos": 21
},
"source": [
"块的一个主要优点是它的多功能性。\n",
"我们可以子类化块以创建层(如全连接层的类)、\n",
"整个模型(如上面的`MLP`类)或具有中等复杂度的各种组件。\n",
"我们在接下来的章节中充分利用了这种多功能性,\n",
"比如在处理卷积神经网络时。\n",
"\n",
"## [**顺序块**]\n",
"\n",
"现在我们可以更仔细地看看`Sequential`类是如何工作的,\n",
"回想一下`Sequential`的设计是为了把其他模块串起来。\n",
    "为了构建我们自己的简化的`MySequential`,\n",
"我们只需要定义两个关键函数:\n",
"\n",
"1. 一种将块逐个追加到列表中的函数;\n",
"1. 一种前向传播函数,用于将输入按追加块的顺序传递给块组成的“链条”。\n",
"\n",
"下面的`MySequential`类提供了与默认`Sequential`类相同的功能。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "dd09709c",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:57:01.344392Z",
"iopub.status.busy": "2023-08-18T06:57:01.343695Z",
"iopub.status.idle": "2023-08-18T06:57:01.349458Z",
"shell.execute_reply": "2023-08-18T06:57:01.348481Z"
},
"origin_pos": 23,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"class MySequential(nn.Module):\n",
" def __init__(self, *args):\n",
" super().__init__()\n",
" for idx, module in enumerate(args):\n",
" # 这里,module是Module子类的一个实例。我们把它保存在'Module'类的成员\n",
    "        # 变量_modules中。_modules的类型是OrderedDict\n",
" self._modules[str(idx)] = module\n",
"\n",
" def forward(self, X):\n",
" # OrderedDict保证了按照成员添加的顺序遍历它们\n",
" for block in self._modules.values():\n",
" X = block(X)\n",
" return X"
]
},
{
"cell_type": "markdown",
"id": "2a44d091",
"metadata": {
"origin_pos": 27,
"tab": [
"pytorch"
]
},
"source": [
"`__init__`函数将每个模块逐个添加到有序字典`_modules`中。\n",
"读者可能会好奇为什么每个`Module`都有一个`_modules`属性?\n",
"以及为什么我们使用它而不是自己定义一个Python列表?\n",
"简而言之,`_modules`的主要优点是:\n",
"在模块的参数初始化过程中,\n",
"系统知道在`_modules`字典中查找需要初始化参数的子块。\n"
]
},
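{
 "cell_type": "markdown",
 "metadata": {},
 "source": [
  "作为对比,下面的示意说明了若把子块保存在普通Python列表中会发生什么(`BadSequential`为自拟名称):这些子块没有被注册到`_modules`中,`parameters()`因此找不到它们的参数,优化器也就无法更新它们。\n",
  "\n",

```python
import torch
from torch import nn

class BadSequential(nn.Module):
    """把子块存放在普通Python列表中:参数不会被注册"""
    def __init__(self, *args):
        super().__init__()
        self.blocks = list(args)  # 绕过了_modules

    def forward(self, X):
        for block in self.blocks:
            X = block(X)
        return X

bad = BadSequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))
good = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))
print(len(list(bad.parameters())))   # 0:优化器将看不到任何参数
print(len(list(good.parameters())))  # 4:两个全连接层的权重和偏置
```

 ]
},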
{
"cell_type": "markdown",
"id": "0272bce5",
"metadata": {
"origin_pos": 29
},
"source": [
"当`MySequential`的前向传播函数被调用时,\n",
"每个添加的块都按照它们被添加的顺序执行。\n",
"现在可以使用我们的`MySequential`类重新实现多层感知机。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "9672de9a",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:57:01.353302Z",
"iopub.status.busy": "2023-08-18T06:57:01.352727Z",
"iopub.status.idle": "2023-08-18T06:57:01.360268Z",
"shell.execute_reply": "2023-08-18T06:57:01.359462Z"
},
"origin_pos": 31,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[ 2.2759e-01, -4.7003e-02, 4.2846e-01, -1.2546e-01, 1.5296e-01,\n",
" 1.8972e-01, 9.7048e-02, 4.5479e-04, -3.7986e-02, 6.4842e-02],\n",
" [ 2.7825e-01, -9.7517e-02, 4.8541e-01, -2.4519e-01, -8.4580e-02,\n",
" 2.8538e-01, 3.6861e-02, 2.9411e-02, -1.0612e-01, 1.2620e-01]],\n",
" grad_fn=<AddmmBackward0>)"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"net = MySequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))\n",
"net(X)"
]
},
{
"cell_type": "markdown",
"id": "189aa472",
"metadata": {
"origin_pos": 33
},
"source": [
"请注意,`MySequential`的用法与之前为`Sequential`类编写的代码相同\n",
"(如 :numref:`sec_mlp_concise` 中所述)。\n",
"\n",
"## [**在前向传播函数中执行代码**]\n",
"\n",
"`Sequential`类使模型构造变得简单,\n",
"允许我们组合新的架构,而不必定义自己的类。\n",
"然而,并不是所有的架构都是简单的顺序架构。\n",
"当需要更强的灵活性时,我们需要定义自己的块。\n",
"例如,我们可能希望在前向传播函数中执行Python的控制流。\n",
"此外,我们可能希望执行任意的数学运算,\n",
"而不是简单地依赖预定义的神经网络层。\n",
"\n",
"到目前为止,\n",
"我们网络中的所有操作都对网络的激活值及网络的参数起作用。\n",
"然而,有时我们可能希望合并既不是上一层的结果也不是可更新参数的项,\n",
    "我们称之为*常数参数*(constant parameter)。\n",
"例如,我们需要一个计算函数\n",
"$f(\\mathbf{x},\\mathbf{w}) = c \\cdot \\mathbf{w}^\\top \\mathbf{x}$的层,\n",
"其中$\\mathbf{x}$是输入,\n",
"$\\mathbf{w}$是参数,\n",
"$c$是某个在优化过程中没有更新的指定常量。\n",
"因此我们实现了一个`FixedHiddenMLP`类,如下所示:\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "9ad09596",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:57:01.364000Z",
"iopub.status.busy": "2023-08-18T06:57:01.363468Z",
"iopub.status.idle": "2023-08-18T06:57:01.369665Z",
"shell.execute_reply": "2023-08-18T06:57:01.368755Z"
},
"origin_pos": 35,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"class FixedHiddenMLP(nn.Module):\n",
" def __init__(self):\n",
" super().__init__()\n",
" # 不计算梯度的随机权重参数。因此其在训练期间保持不变\n",
" self.rand_weight = torch.rand((20, 20), requires_grad=False)\n",
" self.linear = nn.Linear(20, 20)\n",
"\n",
" def forward(self, X):\n",
" X = self.linear(X)\n",
" # 使用创建的常量参数以及relu和mm函数\n",
" X = F.relu(torch.mm(X, self.rand_weight) + 1)\n",
" # 复用全连接层。这相当于两个全连接层共享参数\n",
" X = self.linear(X)\n",
" # 控制流\n",
" while X.abs().sum() > 1:\n",
" X /= 2\n",
" return X.sum()"
]
},
{
"cell_type": "markdown",
"id": "06017344",
"metadata": {
"origin_pos": 38
},
"source": [
"在这个`FixedHiddenMLP`模型中,我们实现了一个隐藏层,\n",
"其权重(`self.rand_weight`)在实例化时被随机初始化,之后为常量。\n",
"这个权重不是一个模型参数,因此它永远不会被反向传播更新。\n",
"然后,神经网络将这个固定层的输出通过一个全连接层。\n",
"\n",
"注意,在返回输出之前,模型做了一些不寻常的事情:\n",
"它运行了一个while循环,在$L_1$范数大于$1$的条件下,\n",
"将输出向量除以$2$,直到它满足条件为止。\n",
"最后,模型返回了`X`中所有项的和。\n",
    "注意,此操作不太可能用于任何实际任务中,\n",
"我们只展示如何将任意代码集成到神经网络计算的流程中。\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "00ebc567",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:57:01.373508Z",
"iopub.status.busy": "2023-08-18T06:57:01.372789Z",
"iopub.status.idle": "2023-08-18T06:57:01.380049Z",
"shell.execute_reply": "2023-08-18T06:57:01.379025Z"
},
"origin_pos": 40,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor(0.1862, grad_fn=<SumBackward0>)"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"net = FixedHiddenMLP()\n",
"net(X)"
]
},
{
"cell_type": "markdown",
"id": "80b18eb2",
"metadata": {
"origin_pos": 41
},
"source": [
"我们可以[**混合搭配各种组合块的方法**]。\n",
    "在下面的例子中,我们以几种任意的方式嵌套块。\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "6ca3b399",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:57:01.384091Z",
"iopub.status.busy": "2023-08-18T06:57:01.383236Z",
"iopub.status.idle": "2023-08-18T06:57:01.394649Z",
"shell.execute_reply": "2023-08-18T06:57:01.393535Z"
},
"origin_pos": 43,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor(0.2183, grad_fn=<SumBackward0>)"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"class NestMLP(nn.Module):\n",
" def __init__(self):\n",
" super().__init__()\n",
" self.net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),\n",
" nn.Linear(64, 32), nn.ReLU())\n",
" self.linear = nn.Linear(32, 16)\n",
"\n",
" def forward(self, X):\n",
" return self.linear(self.net(X))\n",
"\n",
"chimera = nn.Sequential(NestMLP(), nn.Linear(16, 20), FixedHiddenMLP())\n",
"chimera(X)"
]
},
{
"cell_type": "markdown",
"id": "3b12e280",
"metadata": {
"origin_pos": 46
},
"source": [
"## 效率\n"
]
},
{
"cell_type": "markdown",
"id": "e26229d3",
"metadata": {
"origin_pos": 48,
"tab": [
"pytorch"
]
},
"source": [
"读者可能会开始担心操作效率的问题。\n",
"毕竟,我们在一个高性能的深度学习库中进行了大量的字典查找、\n",
"代码执行和许多其他的Python代码。\n",
"Python的问题[全局解释器锁](https://wiki.python.org/moin/GlobalInterpreterLock)\n",
"是众所周知的。\n",
"在深度学习环境中,我们担心速度极快的GPU可能要等到CPU运行Python代码后才能运行另一个作业。\n"
]
},
{
"cell_type": "markdown",
"id": "4fa617e6",
"metadata": {
"origin_pos": 51
},
"source": [
"## 小结\n",
"\n",
"* 一个块可以由许多层组成;一个块可以由许多块组成。\n",
"* 块可以包含代码。\n",
"* 块负责大量的内部处理,包括参数初始化和反向传播。\n",
"* 层和块的顺序连接由`Sequential`块处理。\n",
"\n",
"## 练习\n",
"\n",
"1. 如果将`MySequential`中存储块的方式更改为Python列表,会出现什么样的问题?\n",
"1. 实现一个块,它以两个块为参数,例如`net1`和`net2`,并返回前向传播中两个网络的串联输出。这也被称为平行块。\n",
"1. 假设我们想要连接同一网络的多个实例。实现一个函数,该函数生成同一个块的多个实例,并在此基础上构建更大的网络。\n"
]
},
{
"cell_type": "markdown",
"id": "c29846c8",
"metadata": {
"origin_pos": 53,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/1827)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,896 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b05be39e",
"metadata": {
"origin_pos": 0
},
"source": [
"# 参数管理\n",
"\n",
"在选择了架构并设置了超参数后,我们就进入了训练阶段。\n",
"此时,我们的目标是找到使损失函数最小化的模型参数值。\n",
"经过训练后,我们将需要使用这些参数来做出未来的预测。\n",
"此外,有时我们希望提取参数,以便在其他环境中复用它们,\n",
"将模型保存下来,以便它可以在其他软件中执行,\n",
"或者为了获得科学的理解而进行检查。\n",
"\n",
"之前的介绍中,我们只依靠深度学习框架来完成训练的工作,\n",
"而忽略了操作参数的具体细节。\n",
"本节,我们将介绍以下内容:\n",
"\n",
"* 访问参数,用于调试、诊断和可视化;\n",
"* 参数初始化;\n",
"* 在不同模型组件间共享参数。\n",
"\n",
"(**我们首先看一下具有单隐藏层的多层感知机。**)\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "ab7ef7a0",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:09.649068Z",
"iopub.status.busy": "2023-08-18T07:01:09.648305Z",
"iopub.status.idle": "2023-08-18T07:01:10.928992Z",
"shell.execute_reply": "2023-08-18T07:01:10.927959Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[-0.0970],\n",
" [-0.0827]], grad_fn=<AddmmBackward0>)"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import torch\n",
"from torch import nn\n",
"\n",
"net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))\n",
"X = torch.rand(size=(2, 4))\n",
"net(X)"
]
},
{
"cell_type": "markdown",
"id": "fa004a12",
"metadata": {
"origin_pos": 5
},
"source": [
"## [**参数访问**]\n",
"\n",
"我们从已有模型中访问参数。\n",
"当通过`Sequential`类定义模型时,\n",
"我们可以通过索引来访问模型的任意层。\n",
"这就像模型是一个列表一样,每层的参数都在其属性中。\n",
"如下所示,我们可以检查第二个全连接层的参数。\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "5e2fff9a",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:10.933865Z",
"iopub.status.busy": "2023-08-18T07:01:10.933267Z",
"iopub.status.idle": "2023-08-18T07:01:10.939922Z",
"shell.execute_reply": "2023-08-18T07:01:10.938931Z"
},
"origin_pos": 7,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"OrderedDict([('weight', tensor([[-0.0427, -0.2939, -0.1894, 0.0220, -0.1709, -0.1522, -0.0334, -0.2263]])), ('bias', tensor([0.0887]))])\n"
]
}
],
"source": [
"print(net[2].state_dict())"
]
},
{
"cell_type": "markdown",
"id": "b77c779c",
"metadata": {
"origin_pos": 9
},
"source": [
"输出的结果告诉我们一些重要的事情:\n",
"首先,这个全连接层包含两个参数,分别是该层的权重和偏置。\n",
"两者都存储为单精度浮点数(float32)。\n",
"注意,参数名称允许唯一标识每个参数,即使在包含数百个层的网络中也是如此。\n",
"\n",
"### [**目标参数**]\n",
"\n",
"注意,每个参数都表示为参数类的一个实例。\n",
"要对参数执行任何操作,首先我们需要访问底层的数值。\n",
"有几种方法可以做到这一点。有些比较简单,而另一些则比较通用。\n",
"下面的代码从第二个全连接层(即第三个神经网络层)提取偏置,\n",
"提取后返回的是一个参数类实例,并进一步访问该参数的值。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "d0682fff",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:10.945104Z",
"iopub.status.busy": "2023-08-18T07:01:10.944250Z",
"iopub.status.idle": "2023-08-18T07:01:10.951764Z",
"shell.execute_reply": "2023-08-18T07:01:10.950790Z"
},
"origin_pos": 11,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'torch.nn.parameter.Parameter'>\n",
"Parameter containing:\n",
"tensor([0.0887], requires_grad=True)\n",
"tensor([0.0887])\n"
]
}
],
"source": [
"print(type(net[2].bias))\n",
"print(net[2].bias)\n",
"print(net[2].bias.data)"
]
},
{
"cell_type": "markdown",
"id": "b90565b1",
"metadata": {
"origin_pos": 14,
"tab": [
"pytorch"
]
},
"source": [
"参数是复合的对象,包含值、梯度和额外信息。\n",
"这就是我们需要显式参数值的原因。\n",
"除了值之外,我们还可以访问每个参数的梯度。\n",
"在上面这个网络中,由于我们还没有调用反向传播,所以参数的梯度处于初始状态。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "3cf4d55b",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:10.956378Z",
"iopub.status.busy": "2023-08-18T07:01:10.955542Z",
"iopub.status.idle": "2023-08-18T07:01:10.961810Z",
"shell.execute_reply": "2023-08-18T07:01:10.960767Z"
},
"origin_pos": 16,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"net[2].weight.grad == None"
]
},
{
"cell_type": "markdown",
"id": "01e647c1",
"metadata": {
"origin_pos": 17
},
"source": [
"### [**一次性访问所有参数**]\n",
"\n",
"当我们需要对所有参数执行操作时,逐个访问它们可能会很麻烦。\n",
"当我们处理更复杂的块(例如,嵌套块)时,情况可能会变得特别复杂,\n",
"因为我们需要递归整个树来提取每个子块的参数。\n",
"下面,我们将通过演示来比较访问第一个全连接层的参数和访问所有层。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "916939ce",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:10.966725Z",
"iopub.status.busy": "2023-08-18T07:01:10.965969Z",
"iopub.status.idle": "2023-08-18T07:01:10.972600Z",
"shell.execute_reply": "2023-08-18T07:01:10.971655Z"
},
"origin_pos": 19,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"('weight', torch.Size([8, 4])) ('bias', torch.Size([8]))\n",
"('0.weight', torch.Size([8, 4])) ('0.bias', torch.Size([8])) ('2.weight', torch.Size([1, 8])) ('2.bias', torch.Size([1]))\n"
]
}
],
"source": [
"print(*[(name, param.shape) for name, param in net[0].named_parameters()])\n",
"print(*[(name, param.shape) for name, param in net.named_parameters()])"
]
},
{
"cell_type": "markdown",
"id": "c9cc1e2f",
"metadata": {
"origin_pos": 21
},
"source": [
"这为我们提供了另一种访问网络参数的方式,如下所示。\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "116207ef",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:10.977269Z",
"iopub.status.busy": "2023-08-18T07:01:10.976623Z",
"iopub.status.idle": "2023-08-18T07:01:10.983222Z",
"shell.execute_reply": "2023-08-18T07:01:10.982309Z"
},
"origin_pos": 23,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([0.0887])"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"net.state_dict()['2.bias'].data"
]
},
{
"cell_type": "markdown",
"id": "f2ae2721",
"metadata": {
"origin_pos": 26
},
"source": [
"### [**从嵌套块收集参数**]\n",
"\n",
"让我们看看,如果我们将多个块相互嵌套,参数命名约定是如何工作的。\n",
"我们首先定义一个生成块的函数(可以说是“块工厂”),然后将这些块组合到更大的块中。\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "712e31fd",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:10.988088Z",
"iopub.status.busy": "2023-08-18T07:01:10.987352Z",
"iopub.status.idle": "2023-08-18T07:01:10.998245Z",
"shell.execute_reply": "2023-08-18T07:01:10.997197Z"
},
"origin_pos": 28,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[0.2596],\n",
" [0.2596]], grad_fn=<AddmmBackward0>)"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def block1():\n",
" return nn.Sequential(nn.Linear(4, 8), nn.ReLU(),\n",
" nn.Linear(8, 4), nn.ReLU())\n",
"\n",
"def block2():\n",
" net = nn.Sequential()\n",
" for i in range(4):\n",
" # 在这里嵌套\n",
" net.add_module(f'block {i}', block1())\n",
" return net\n",
"\n",
"rgnet = nn.Sequential(block2(), nn.Linear(4, 1))\n",
"rgnet(X)"
]
},
{
"cell_type": "markdown",
"id": "ac9958fb",
"metadata": {
"origin_pos": 31
},
"source": [
"[**设计了网络后,我们看看它是如何工作的。**]\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c7d7717d",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:11.002889Z",
"iopub.status.busy": "2023-08-18T07:01:11.002264Z",
"iopub.status.idle": "2023-08-18T07:01:11.007643Z",
"shell.execute_reply": "2023-08-18T07:01:11.006464Z"
},
"origin_pos": 33,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sequential(\n",
" (0): Sequential(\n",
" (block 0): Sequential(\n",
" (0): Linear(in_features=4, out_features=8, bias=True)\n",
" (1): ReLU()\n",
" (2): Linear(in_features=8, out_features=4, bias=True)\n",
" (3): ReLU()\n",
" )\n",
" (block 1): Sequential(\n",
" (0): Linear(in_features=4, out_features=8, bias=True)\n",
" (1): ReLU()\n",
" (2): Linear(in_features=8, out_features=4, bias=True)\n",
" (3): ReLU()\n",
" )\n",
" (block 2): Sequential(\n",
" (0): Linear(in_features=4, out_features=8, bias=True)\n",
" (1): ReLU()\n",
" (2): Linear(in_features=8, out_features=4, bias=True)\n",
" (3): ReLU()\n",
" )\n",
" (block 3): Sequential(\n",
" (0): Linear(in_features=4, out_features=8, bias=True)\n",
" (1): ReLU()\n",
" (2): Linear(in_features=8, out_features=4, bias=True)\n",
" (3): ReLU()\n",
" )\n",
" )\n",
" (1): Linear(in_features=4, out_features=1, bias=True)\n",
")\n"
]
}
],
"source": [
"print(rgnet)"
]
},
{
"cell_type": "markdown",
"id": "1c49f699",
"metadata": {
"origin_pos": 35
},
"source": [
"因为层是分层嵌套的,所以我们也可以像通过嵌套列表索引一样访问它们。\n",
"下面,我们访问第一个主要的块中、第二个子块的第一层的偏置项。\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "939ba4d3",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:11.012522Z",
"iopub.status.busy": "2023-08-18T07:01:11.011839Z",
"iopub.status.idle": "2023-08-18T07:01:11.018508Z",
"shell.execute_reply": "2023-08-18T07:01:11.017590Z"
},
"origin_pos": 37,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([ 0.1999, -0.4073, -0.1200, -0.2033, -0.1573, 0.3546, -0.2141, -0.2483])"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"rgnet[0][1][0].bias.data"
]
},
{
"cell_type": "markdown",
"id": "0383b6a9",
"metadata": {
"origin_pos": 40
},
"source": [
"## 参数初始化\n",
"\n",
"知道了如何访问参数后,现在我们看看如何正确地初始化参数。\n",
"我们在 :numref:`sec_numerical_stability`中讨论了良好初始化的必要性。\n",
"深度学习框架提供默认的随机初始化,\n",
"也允许我们创建自定义初始化方法,\n",
"满足我们通过其他规则初始化权重的需求。\n"
]
},
{
"cell_type": "markdown",
"id": "0418f044",
"metadata": {
"origin_pos": 42,
"tab": [
"pytorch"
]
},
"source": [
"默认情况下,PyTorch会根据一个范围均匀地初始化权重和偏置矩阵,\n",
"这个范围是根据输入和输出维度计算出的。\n",
"PyTorch的`nn.init`模块提供了多种预置初始化方法。\n"
]
},
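{
 "cell_type": "markdown",
 "id": "default-init-bound-note",
 "metadata": {},
 "source": [
  "以PyTorch 1.x中`nn.Linear`的实现为例(具体公式可能随版本变化,这里仅作示意),\n",
  "其默认初始化大致相当于从如下均匀分布中采样权重,\n",
  "其中$n_\\text{in}$为该层的输入维度:\n",
  "\n",
  "$$\n",
  "W \\sim U\\left(-\\sqrt{\\frac{1}{n_\\text{in}}},\\ \\sqrt{\\frac{1}{n_\\text{in}}}\\right)\n",
  "$$\n"
 ]
},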
{
"cell_type": "markdown",
"id": "0b0b932a",
"metadata": {
"origin_pos": 45
},
"source": [
"### [**内置初始化**]\n",
"\n",
"让我们首先调用内置的初始化器。\n",
"下面的代码将所有权重参数初始化为标准差为0.01的高斯随机变量,\n",
"且将偏置参数设置为0。\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "2f00d5e7",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:11.023955Z",
"iopub.status.busy": "2023-08-18T07:01:11.023046Z",
"iopub.status.idle": "2023-08-18T07:01:11.033287Z",
"shell.execute_reply": "2023-08-18T07:01:11.032096Z"
},
"origin_pos": 47,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"(tensor([-0.0214, -0.0015, -0.0100, -0.0058]), tensor(0.))"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def init_normal(m):\n",
" if type(m) == nn.Linear:\n",
" nn.init.normal_(m.weight, mean=0, std=0.01)\n",
" nn.init.zeros_(m.bias)\n",
"net.apply(init_normal)\n",
"net[0].weight.data[0], net[0].bias.data[0]"
]
},
{
"cell_type": "markdown",
"id": "753e540b",
"metadata": {
"origin_pos": 50
},
"source": [
"我们还可以将所有参数初始化为给定的常数,比如初始化为1。\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "49ee306c",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:11.038321Z",
"iopub.status.busy": "2023-08-18T07:01:11.037607Z",
"iopub.status.idle": "2023-08-18T07:01:11.049009Z",
"shell.execute_reply": "2023-08-18T07:01:11.047793Z"
},
"origin_pos": 52,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"(tensor([1., 1., 1., 1.]), tensor(0.))"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def init_constant(m):\n",
" if type(m) == nn.Linear:\n",
" nn.init.constant_(m.weight, 1)\n",
" nn.init.zeros_(m.bias)\n",
"net.apply(init_constant)\n",
"net[0].weight.data[0], net[0].bias.data[0]"
]
},
{
"cell_type": "markdown",
"id": "e086279d",
"metadata": {
"origin_pos": 55
},
"source": [
"我们还可以[**对某些块应用不同的初始化方法**]。\n",
"例如,下面我们使用Xavier初始化方法初始化第一个神经网络层,\n",
"然后将第三个神经网络层初始化为常量值42。\n"
]
},
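{
 "cell_type": "markdown",
 "id": "xavier-uniform-formula-note",
 "metadata": {},
 "source": [
  "作为参考,Xavier均匀初始化从如下分布中采样权重\n",
  "(设$n_\\text{in}$和$n_\\text{out}$分别为该层的输入和输出维度):\n",
  "\n",
  "$$\n",
  "W \\sim U\\left(-\\sqrt{\\frac{6}{n_\\text{in}+n_\\text{out}}},\\ \\sqrt{\\frac{6}{n_\\text{in}+n_\\text{out}}}\\right)\n",
  "$$\n"
 ]
},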
{
"cell_type": "code",
"execution_count": 12,
"id": "1a90ffaa",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:11.054335Z",
"iopub.status.busy": "2023-08-18T07:01:11.053550Z",
"iopub.status.idle": "2023-08-18T07:01:11.063215Z",
"shell.execute_reply": "2023-08-18T07:01:11.062244Z"
},
"origin_pos": 57,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([ 0.5236, 0.0516, -0.3236, 0.3794])\n",
"tensor([[42., 42., 42., 42., 42., 42., 42., 42.]])\n"
]
}
],
"source": [
"def init_xavier(m):\n",
" if type(m) == nn.Linear:\n",
" nn.init.xavier_uniform_(m.weight)\n",
"def init_42(m):\n",
" if type(m) == nn.Linear:\n",
" nn.init.constant_(m.weight, 42)\n",
"\n",
"net[0].apply(init_xavier)\n",
"net[2].apply(init_42)\n",
"print(net[0].weight.data[0])\n",
"print(net[2].weight.data)"
]
},
{
"cell_type": "markdown",
"id": "581dcade",
"metadata": {
"origin_pos": 60
},
"source": [
"### [**自定义初始化**]\n",
"\n",
"有时,深度学习框架没有提供我们需要的初始化方法。\n",
"在下面的例子中,我们使用以下的分布为任意权重参数$w$定义初始化方法:\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
" w \\sim \\begin{cases}\n",
"        U(5, 10) & \\text{ 概率 } \\frac{1}{4} \\\\\n",
"        0    & \\text{ 概率 } \\frac{1}{2} \\\\\n",
"        U(-10, -5) & \\text{ 概率 } \\frac{1}{4}\n",
" \\end{cases}\n",
"\\end{aligned}\n",
"$$\n"
]
},
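在调用深度学习框架之前,也可以先用纯Python草拟一下上述混合分布的采样过程,以理解三种情况各自的概率(`sample_w`是这里为演示而假设的函数名,并非本书代码):

```python
import random

def sample_w():
    """按 1/4、1/2、1/4 的概率从三段中采样一个权重(示意实现)"""
    u = random.random()
    if u < 0.25:
        return random.uniform(5, 10)     # 概率 1/4
    elif u < 0.75:
        return 0.0                       # 概率 1/2
    else:
        return random.uniform(-10, -5)   # 概率 1/4

samples = [sample_w() for _ in range(1000)]
```

采样结果要么恰好为0,要么绝对值落在$[5, 10]$内,这正是下面的`my_init`想要实现的效果。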
{
"cell_type": "markdown",
"id": "12502b7c",
"metadata": {
"origin_pos": 62,
"tab": [
"pytorch"
]
},
"source": [
"同样,我们实现一个`my_init`函数,并将其应用到`net`。\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "9166f6e3",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:11.068164Z",
"iopub.status.busy": "2023-08-18T07:01:11.067460Z",
"iopub.status.idle": "2023-08-18T07:01:11.079228Z",
"shell.execute_reply": "2023-08-18T07:01:11.078069Z"
},
"origin_pos": 66,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Init weight torch.Size([8, 4])\n",
"Init weight torch.Size([1, 8])\n"
]
},
{
"data": {
"text/plain": [
"tensor([[5.4079, 9.3334, 5.0616, 8.3095],\n",
" [0.0000, 7.2788, -0.0000, -0.0000]], grad_fn=<SliceBackward0>)"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def my_init(m):\n",
" if type(m) == nn.Linear:\n",
" print(\"Init\", *[(name, param.shape)\n",
" for name, param in m.named_parameters()][0])\n",
" nn.init.uniform_(m.weight, -10, 10)\n",
" m.weight.data *= m.weight.data.abs() >= 5\n",
"\n",
"net.apply(my_init)\n",
"net[0].weight[:2]"
]
},
{
"cell_type": "markdown",
"id": "030a52c5",
"metadata": {
"origin_pos": 69
},
"source": [
"注意,我们始终可以直接设置参数。\n"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "5b9af1f8",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:11.084158Z",
"iopub.status.busy": "2023-08-18T07:01:11.083416Z",
"iopub.status.idle": "2023-08-18T07:01:11.092672Z",
"shell.execute_reply": "2023-08-18T07:01:11.091537Z"
},
"origin_pos": 71,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([42.0000, 10.3334, 6.0616, 9.3095])"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"net[0].weight.data[:] += 1\n",
"net[0].weight.data[0, 0] = 42\n",
"net[0].weight.data[0]"
]
},
{
"cell_type": "markdown",
"id": "a4144ff7",
"metadata": {
"origin_pos": 75
},
"source": [
"## [**参数绑定**]\n",
"\n",
"有时我们希望在多个层间共享参数:\n",
"我们可以定义一个稠密层,然后使用它的参数来设置另一个层的参数。\n"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "69660fa7",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:11.097767Z",
"iopub.status.busy": "2023-08-18T07:01:11.096948Z",
"iopub.status.idle": "2023-08-18T07:01:11.108904Z",
"shell.execute_reply": "2023-08-18T07:01:11.107763Z"
},
"origin_pos": 77,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([True, True, True, True, True, True, True, True])\n",
"tensor([True, True, True, True, True, True, True, True])\n"
]
}
],
"source": [
"# 我们需要给共享层一个名称,以便可以引用它的参数\n",
"shared = nn.Linear(8, 8)\n",
"net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(),\n",
" shared, nn.ReLU(),\n",
" shared, nn.ReLU(),\n",
" nn.Linear(8, 1))\n",
"net(X)\n",
"# 检查参数是否相同\n",
"print(net[2].weight.data[0] == net[4].weight.data[0])\n",
"net[2].weight.data[0, 0] = 100\n",
"# 确保它们实际上是同一个对象,而不只是有相同的值\n",
"print(net[2].weight.data[0] == net[4].weight.data[0])"
]
},
{
"cell_type": "markdown",
"id": "81dc2c3c",
"metadata": {
"origin_pos": 81,
"tab": [
"pytorch"
]
},
"source": [
"这个例子表明第三个和第五个神经网络层的参数是绑定的。\n",
"它们不仅值相等,而且由相同的张量表示。\n",
"因此,如果我们改变其中一个参数,另一个参数也会改变。\n",
"这里有一个问题:当参数绑定时,梯度会发生什么情况?\n",
"答案是由于模型参数包含梯度,因此在反向传播期间第二个隐藏层\n",
"(即第三个神经网络层)和第三个隐藏层(即第五个神经网络层)的梯度会加在一起。\n"
]
},
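上面关于“梯度相加”的结论可以用一个标量的小例子手工验证(这只是示意性的推导,与本书代码无关):

```python
# 设 y = w * (w * x):同一个参数 w 在两处被使用(类比共享层)
x, w = 3.0, 2.0
h = w * x              # 第一次使用 w
y = w * h              # 第二次使用 w

# 反向传播时,w 的梯度是两处使用各自贡献之和
grad_from_outer = h        # ∂y/∂w(固定 h),来自第二次使用
grad_from_inner = w * x    # (∂y/∂h)·(∂h/∂w) = w·x,来自第一次使用
grad_total = grad_from_outer + grad_from_inner

# 与解析解 dy/dw = 2wx 一致
assert grad_total == 2 * w * x
```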
{
"cell_type": "markdown",
"id": "ef8e6259",
"metadata": {
"origin_pos": 82
},
"source": [
"## 小结\n",
"\n",
"* 我们有几种方法可以访问、初始化和绑定模型参数。\n",
"* 我们可以使用自定义初始化方法。\n",
"\n",
"## 练习\n",
"\n",
"1. 使用 :numref:`sec_model_construction` 中定义的`FancyMLP`模型,访问各个层的参数。\n",
"1. 查看初始化模块文档以了解不同的初始化方法。\n",
"1. 构建包含共享参数层的多层感知机并对其进行训练。在训练过程中,观察模型各层的参数和梯度。\n",
"1. 为什么共享参数是个好主意?\n"
]
},
{
"cell_type": "markdown",
"id": "ead65cf9",
"metadata": {
"origin_pos": 84,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/1829)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,408 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "bec47e64",
"metadata": {
"origin_pos": 0
},
"source": [
"# 读写文件\n",
"\n",
"到目前为止,我们讨论了如何处理数据,\n",
"以及如何构建、训练和测试深度学习模型。\n",
"然而,有时我们希望保存训练的模型,\n",
"以备将来在各种环境中使用(比如在部署中进行预测)。\n",
"此外,当运行一个耗时较长的训练过程时,\n",
"最佳的做法是定期保存中间结果,\n",
"以确保在服务器电源被不小心断掉时,我们不会损失几天的计算结果。\n",
"因此,现在是时候学习如何加载和存储权重向量和整个模型了。\n",
"\n",
"## (**加载和保存张量**)\n",
"\n",
"对于单个张量,我们可以直接调用`load`和`save`函数分别读写它们。\n",
"这两个函数都要求我们提供一个名称,`save`要求将要保存的变量作为输入。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "9b319fd3",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:56:42.668559Z",
"iopub.status.busy": "2023-08-18T06:56:42.667248Z",
"iopub.status.idle": "2023-08-18T06:56:43.728764Z",
"shell.execute_reply": "2023-08-18T06:56:43.727885Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import torch\n",
"from torch import nn\n",
"from torch.nn import functional as F\n",
"\n",
"x = torch.arange(4)\n",
"torch.save(x, 'x-file')"
]
},
{
"cell_type": "markdown",
"id": "e4f44ac7",
"metadata": {
"origin_pos": 5
},
"source": [
"我们现在可以将存储在文件中的数据读回内存。\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "1ab53461",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:56:43.733002Z",
"iopub.status.busy": "2023-08-18T06:56:43.732347Z",
"iopub.status.idle": "2023-08-18T06:56:43.741208Z",
"shell.execute_reply": "2023-08-18T06:56:43.740416Z"
},
"origin_pos": 7,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([0, 1, 2, 3])"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"x2 = torch.load('x-file')\n",
"x2"
]
},
{
"cell_type": "markdown",
"id": "44d4a111",
"metadata": {
"origin_pos": 10
},
"source": [
"我们可以[**存储一个张量列表,然后把它们读回内存。**]\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "81027fe1",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:56:43.744676Z",
"iopub.status.busy": "2023-08-18T06:56:43.744140Z",
"iopub.status.idle": "2023-08-18T06:56:43.751376Z",
"shell.execute_reply": "2023-08-18T06:56:43.750630Z"
},
"origin_pos": 12,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"(tensor([0, 1, 2, 3]), tensor([0., 0., 0., 0.]))"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"y = torch.zeros(4)\n",
"torch.save([x, y],'x-files')\n",
"x2, y2 = torch.load('x-files')\n",
"(x2, y2)"
]
},
{
"cell_type": "markdown",
"id": "b060dd48",
"metadata": {
"origin_pos": 15
},
"source": [
"我们甚至可以(**写入或读取从字符串映射到张量的字典**)。\n",
"当我们要读取或写入模型中的所有权重时,这很方便。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "fde1cb33",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:56:43.754777Z",
"iopub.status.busy": "2023-08-18T06:56:43.754313Z",
"iopub.status.idle": "2023-08-18T06:56:43.761150Z",
"shell.execute_reply": "2023-08-18T06:56:43.760369Z"
},
"origin_pos": 17,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"{'x': tensor([0, 1, 2, 3]), 'y': tensor([0., 0., 0., 0.])}"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mydict = {'x': x, 'y': y}\n",
"torch.save(mydict, 'mydict')\n",
"mydict2 = torch.load('mydict')\n",
"mydict2"
]
},
{
"cell_type": "markdown",
"id": "afa857bf",
"metadata": {
"origin_pos": 20
},
"source": [
"## [**加载和保存模型参数**]\n",
"\n",
"保存单个权重向量(或其他张量)确实有用,\n",
"但是如果我们想保存整个模型,并在以后加载它们,\n",
"单独保存每个向量则会变得很麻烦。\n",
"毕竟,我们可能有数百个参数散布在各处。\n",
"因此,深度学习框架提供了内置函数来保存和加载整个网络。\n",
"需要注意的一个重要细节是,这将保存模型的参数而不是保存整个模型。\n",
"例如,如果我们有一个3层多层感知机,我们需要单独指定架构。\n",
"因为模型本身可以包含任意代码,所以模型本身难以序列化。\n",
"因此,为了恢复模型,我们需要用代码生成架构,\n",
"然后从磁盘加载参数。\n",
"让我们从熟悉的多层感知机开始尝试一下。\n"
]
},
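在看框架提供的函数之前,可以先用标准库`pickle`示意“参数字典”的保存与恢复(`torch.save`在底层也用到了pickle;下面的键名只是假设的示例):

```python
import os
import pickle
import tempfile

# 用嵌套列表模拟一个小的“参数字典”
params = {'hidden.weight': [[0.1, 0.2], [0.3, 0.4]],
          'hidden.bias': [0.0, 0.0]}

path = os.path.join(tempfile.mkdtemp(), 'mlp.params')
with open(path, 'wb') as f:
    pickle.dump(params, f)      # 保存到磁盘
with open(path, 'rb') as f:
    restored = pickle.load(f)   # 从磁盘恢复

assert restored == params
```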
{
"cell_type": "code",
"execution_count": 5,
"id": "2672b5c2",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:56:43.764609Z",
"iopub.status.busy": "2023-08-18T06:56:43.764090Z",
"iopub.status.idle": "2023-08-18T06:56:43.773070Z",
"shell.execute_reply": "2023-08-18T06:56:43.772277Z"
},
"origin_pos": 22,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"class MLP(nn.Module):\n",
" def __init__(self):\n",
" super().__init__()\n",
" self.hidden = nn.Linear(20, 256)\n",
" self.output = nn.Linear(256, 10)\n",
"\n",
" def forward(self, x):\n",
" return self.output(F.relu(self.hidden(x)))\n",
"\n",
"net = MLP()\n",
"X = torch.randn(size=(2, 20))\n",
"Y = net(X)"
]
},
{
"cell_type": "markdown",
"id": "697ceed0",
"metadata": {
"origin_pos": 25
},
"source": [
"接下来,我们[**将模型的参数存储在一个叫做“mlp.params”的文件中。**]\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "a53c1315",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:56:43.776452Z",
"iopub.status.busy": "2023-08-18T06:56:43.775942Z",
"iopub.status.idle": "2023-08-18T06:56:43.780387Z",
"shell.execute_reply": "2023-08-18T06:56:43.779636Z"
},
"origin_pos": 27,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"torch.save(net.state_dict(), 'mlp.params')"
]
},
{
"cell_type": "markdown",
"id": "b6df754a",
"metadata": {
"origin_pos": 30
},
"source": [
"为了恢复模型,我们[**实例化了原始多层感知机模型的一个备份。**]\n",
"这里我们不需要随机初始化模型参数,而是(**直接读取文件中存储的参数。**)\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "da5e1b3f",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:56:43.783850Z",
"iopub.status.busy": "2023-08-18T06:56:43.783240Z",
"iopub.status.idle": "2023-08-18T06:56:43.789905Z",
"shell.execute_reply": "2023-08-18T06:56:43.789164Z"
},
"origin_pos": 32,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"MLP(\n",
" (hidden): Linear(in_features=20, out_features=256, bias=True)\n",
" (output): Linear(in_features=256, out_features=10, bias=True)\n",
")"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"clone = MLP()\n",
"clone.load_state_dict(torch.load('mlp.params'))\n",
"clone.eval()"
]
},
{
"cell_type": "markdown",
"id": "65076662",
"metadata": {
"origin_pos": 35
},
"source": [
"由于两个实例具有相同的模型参数,在输入相同的`X`时,\n",
"两个实例的计算结果应该相同。\n",
"让我们来验证一下。\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "a25ba1f1",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:56:43.793400Z",
"iopub.status.busy": "2023-08-18T06:56:43.792788Z",
"iopub.status.idle": "2023-08-18T06:56:43.798329Z",
"shell.execute_reply": "2023-08-18T06:56:43.797576Z"
},
"origin_pos": 37,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[True, True, True, True, True, True, True, True, True, True],\n",
" [True, True, True, True, True, True, True, True, True, True]])"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Y_clone = clone(X)\n",
"Y_clone == Y"
]
},
{
"cell_type": "markdown",
"id": "7a65b1e2",
"metadata": {
"origin_pos": 39
},
"source": [
"## 小结\n",
"\n",
"* `save`和`load`函数可用于张量对象的文件读写。\n",
"* 我们可以通过参数字典保存和加载网络的全部参数。\n",
"* 保存架构必须在代码中完成,而不是在参数中完成。\n",
"\n",
"## 练习\n",
"\n",
"1. 即使不需要将经过训练的模型部署到不同的设备上,存储模型参数还有什么实际的好处?\n",
"1. 假设我们只想复用网络的一部分,以将其合并到不同的网络架构中。比如想在一个新的网络中使用之前网络的前两层,该怎么做?\n",
"1. 如何同时保存网络架构和参数?需要对架构加上什么限制?\n"
]
},
{
"cell_type": "markdown",
"id": "d803f301",
"metadata": {
"origin_pos": 41,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/1839)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,768 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "618fd23a",
"metadata": {
"origin_pos": 0
},
"source": [
"# GPU\n",
":label:`sec_use_gpu`\n",
"\n",
"在 :numref:`tab_intro_decade`中,\n",
"我们回顾了过去20年计算能力的快速增长。\n",
"简而言之,自2000年以来,GPU性能每十年增长1000倍。\n",
"\n",
"本节,我们将讨论如何利用这种计算性能进行研究。\n",
"首先是如何使用单个GPU,然后是如何使用多个GPU和多个服务器(具有多个GPU)。\n",
"\n",
"我们先看看如何使用单个NVIDIA GPU进行计算。\n",
"首先,确保至少安装了一个NVIDIA GPU。\n",
"然后,下载[NVIDIA驱动和CUDA](https://developer.nvidia.com/cuda-downloads)\n",
"并按照提示设置适当的路径。\n",
"当这些准备工作完成,就可以使用`nvidia-smi`命令来(**查看显卡信息。**)\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "369d9baa",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:06.499888Z",
"iopub.status.busy": "2023-08-18T06:58:06.499324Z",
"iopub.status.idle": "2023-08-18T06:58:06.859541Z",
"shell.execute_reply": "2023-08-18T06:58:06.858210Z"
},
"origin_pos": 1,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Fri Aug 18 06:58:06 2023 \r\n",
"+-----------------------------------------------------------------------------+\r\n",
"| NVIDIA-SMI 470.161.03 Driver Version: 470.161.03 CUDA Version: 11.7 |\r\n",
"|-------------------------------+----------------------+----------------------+\r\n",
"| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n",
"| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n",
"| | | MIG M. |\r\n",
"|===============================+======================+======================|\r\n",
"| 0 Tesla V100-SXM2... Off | 00000000:00:1B.0 Off | 0 |\r\n",
"| N/A 41C P0 42W / 300W | 0MiB / 16160MiB | 0% Default |\r\n",
"| | | N/A |\r\n",
"+-------------------------------+----------------------+----------------------+\r\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"| 1 Tesla V100-SXM2... Off | 00000000:00:1C.0 Off | 0 |\r\n",
"| N/A 44C P0 113W / 300W | 1456MiB / 16160MiB | 53% Default |\r\n",
"| | | N/A |\r\n",
"+-------------------------------+----------------------+----------------------+\r\n",
"| 2 Tesla V100-SXM2... Off | 00000000:00:1D.0 Off | 0 |\r\n",
"| N/A 43C P0 120W / 300W | 1358MiB / 16160MiB | 55% Default |\r\n",
"| | | N/A |\r\n",
"+-------------------------------+----------------------+----------------------+\r\n",
"| 3 Tesla V100-SXM2... Off | 00000000:00:1E.0 Off | 0 |\r\n",
"| N/A 42C P0 47W / 300W | 0MiB / 16160MiB | 0% Default |\r\n",
"| | | N/A |\r\n",
"+-------------------------------+----------------------+----------------------+\r\n",
" \r\n",
"+-----------------------------------------------------------------------------+\r\n",
"| Processes: |\r\n",
"| GPU GI CI PID Type Process name GPU Memory |\r\n",
"| ID ID Usage |\r\n",
"|=============================================================================|\r\n",
"+-----------------------------------------------------------------------------+\r\n"
]
}
],
"source": [
"!nvidia-smi"
]
},
{
"cell_type": "markdown",
"id": "23e1982b",
"metadata": {
"origin_pos": 3,
"tab": [
"pytorch"
]
},
"source": [
"在PyTorch中,每个数组都有一个设备(device),\n",
"我们通常将其称为环境(context)。\n",
"默认情况下,所有变量和相关的计算都分配给CPU。\n",
"有时环境可能是GPU。\n",
"当我们跨多个服务器部署作业时,事情会变得更加棘手。\n",
"通过智能地将数组分配给环境,\n",
"我们可以最大限度地减少在设备之间传输数据的时间。\n",
"例如,当在带有GPU的服务器上训练神经网络时,\n",
"我们通常希望模型的参数在GPU上。\n"
]
},
{
"cell_type": "markdown",
"id": "aeacf63c",
"metadata": {
"origin_pos": 5
},
"source": [
"要运行此部分中的程序,至少需要两个GPU。\n",
"注意,对大多数桌面计算机来说,这可能是奢侈的,但在云中很容易获得。\n",
"例如可以使用AWS EC2的多GPU实例。\n",
"本书的其他章节大都不需要多个GPU,\n",
"而本节只是为了展示数据如何在不同的设备之间传递。\n",
"\n",
"## [**计算设备**]\n",
"\n",
"我们可以指定用于存储和计算的设备,如CPU和GPU。\n",
"默认情况下,张量是在内存中创建的,然后使用CPU计算它。\n"
]
},
{
"cell_type": "markdown",
"id": "872e46f0",
"metadata": {
"origin_pos": 7,
"tab": [
"pytorch"
]
},
"source": [
"在PyTorch中,CPU和GPU可以用`torch.device('cpu')`\n",
"和`torch.device('cuda')`表示。\n",
"应该注意的是,`cpu`设备代表所有物理CPU和内存,\n",
"即PyTorch的计算将尝试使用所有CPU核心。\n",
"然而,`gpu`设备只代表一个卡和相应的显存。\n",
"如果有多个GPU,我们使用`torch.device(f'cuda:{i}')`\n",
"来表示第$i$块GPU($i$从0开始)。\n",
"另外,`cuda:0`和`cuda`是等价的。\n"
]
},
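设备的命名规则本身可以用一行Python表达(纯字符串示例,不需要GPU):

```python
# 第 i 块 GPU 记作 'cuda:i',i 从 0 开始;'cuda' 等价于 'cuda:0'
devices = [f'cuda:{i}' for i in range(4)]
```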
{
"cell_type": "code",
"execution_count": 2,
"id": "9f69ad46",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:06.865430Z",
"iopub.status.busy": "2023-08-18T06:58:06.864979Z",
"iopub.status.idle": "2023-08-18T06:58:07.970615Z",
"shell.execute_reply": "2023-08-18T06:58:07.969801Z"
},
"origin_pos": 10,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"(device(type='cpu'), device(type='cuda'), device(type='cuda', index=1))"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import torch\n",
"from torch import nn\n",
"\n",
"torch.device('cpu'), torch.device('cuda'), torch.device('cuda:1')"
]
},
{
"cell_type": "markdown",
"id": "248784cc",
"metadata": {
"origin_pos": 13
},
"source": [
"我们可以(**查询可用gpu的数量。**)\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "c29151b0",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:07.974568Z",
"iopub.status.busy": "2023-08-18T06:58:07.973917Z",
"iopub.status.idle": "2023-08-18T06:58:07.979097Z",
"shell.execute_reply": "2023-08-18T06:58:07.978337Z"
},
"origin_pos": 15,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"2"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"torch.cuda.device_count()"
]
},
{
"cell_type": "markdown",
"id": "6e1bc4a6",
"metadata": {
"origin_pos": 18
},
"source": [
"现在我们定义了两个方便的函数,\n",
"[**这两个函数允许我们在不存在所需所有GPU的情况下运行代码。**]\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "cda0ab76",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:07.983261Z",
"iopub.status.busy": "2023-08-18T06:58:07.982604Z",
"iopub.status.idle": "2023-08-18T06:58:07.990309Z",
"shell.execute_reply": "2023-08-18T06:58:07.989541Z"
},
"origin_pos": 20,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"(device(type='cuda', index=0),\n",
" device(type='cpu'),\n",
" [device(type='cuda', index=0), device(type='cuda', index=1)])"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def try_gpu(i=0): #@save\n",
" \"\"\"如果存在,则返回gpu(i),否则返回cpu()\"\"\"\n",
" if torch.cuda.device_count() >= i + 1:\n",
" return torch.device(f'cuda:{i}')\n",
" return torch.device('cpu')\n",
"\n",
"def try_all_gpus(): #@save\n",
" \"\"\"返回所有可用的GPU,如果没有GPU,则返回[cpu(),]\"\"\"\n",
" devices = [torch.device(f'cuda:{i}')\n",
" for i in range(torch.cuda.device_count())]\n",
" return devices if devices else [torch.device('cpu')]\n",
"\n",
"try_gpu(), try_gpu(10), try_all_gpus()"
]
},
{
"cell_type": "markdown",
"id": "034b0d3b",
"metadata": {
"origin_pos": 23
},
"source": [
"## 张量与GPU\n",
"\n",
"我们可以[**查询张量所在的设备。**]\n",
"默认情况下,张量是在CPU上创建的。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "f6ab0f26",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:07.994741Z",
"iopub.status.busy": "2023-08-18T06:58:07.994126Z",
"iopub.status.idle": "2023-08-18T06:58:07.999439Z",
"shell.execute_reply": "2023-08-18T06:58:07.998673Z"
},
"origin_pos": 25,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"device(type='cpu')"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"x = torch.tensor([1, 2, 3])\n",
"x.device"
]
},
{
"cell_type": "markdown",
"id": "f39b0efa",
"metadata": {
"origin_pos": 28
},
"source": [
"需要注意的是,无论何时我们要对多个项进行操作,\n",
"它们都必须在同一个设备上。\n",
"例如,如果我们对两个张量求和,\n",
"我们需要确保两个张量都位于同一个设备上,\n",
"否则框架将不知道在哪里存储结果,甚至不知道在哪里执行计算。\n",
"\n",
"### [**存储在GPU上**]\n",
"\n",
"有几种方法可以在GPU上存储张量。\n",
"例如,我们可以在创建张量时指定存储设备。\n",
"接下来,我们在第一个`gpu`上创建张量变量`X`。\n",
"在GPU上创建的张量只消耗这个GPU的显存。\n",
"我们可以使用`nvidia-smi`命令查看显存使用情况。\n",
"一般来说,我们需要确保不创建超过GPU显存限制的数据。\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "a67dbf2f",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:08.004162Z",
"iopub.status.busy": "2023-08-18T06:58:08.003541Z",
"iopub.status.idle": "2023-08-18T06:58:09.277879Z",
"shell.execute_reply": "2023-08-18T06:58:09.277008Z"
},
"origin_pos": 30,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[1., 1., 1.],\n",
" [1., 1., 1.]], device='cuda:0')"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"X = torch.ones(2, 3, device=try_gpu())\n",
"X"
]
},
{
"cell_type": "markdown",
"id": "dd17f6d7",
"metadata": {
"origin_pos": 33
},
"source": [
"假设我们至少有两个GPU,下面的代码将在(**第二个GPU上创建一个随机张量。**)\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "7c0d4a84",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:09.282814Z",
"iopub.status.busy": "2023-08-18T06:58:09.282230Z",
"iopub.status.idle": "2023-08-18T06:58:10.279046Z",
"shell.execute_reply": "2023-08-18T06:58:10.278227Z"
},
"origin_pos": 35,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[0.4860, 0.1285, 0.0440],\n",
" [0.9743, 0.4159, 0.9979]], device='cuda:1')"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Y = torch.rand(2, 3, device=try_gpu(1))\n",
"Y"
]
},
{
"cell_type": "markdown",
"id": "71646fa2",
"metadata": {
"origin_pos": 38
},
"source": [
"### 复制\n",
"\n",
"如果我们[**要计算`X + Y`,我们需要决定在哪里执行这个操作**]。\n",
"例如,如 :numref:`fig_copyto`所示,\n",
"我们可以将`X`传输到第二个GPU并在那里执行操作。\n",
"*不要*简单地将`X`加上`Y`,因为这会导致异常:\n",
"运行时引擎不知道该怎么做,它在同一设备上找不到数据,从而执行失败。\n",
"由于`Y`位于第二个GPU上,所以我们需要将`X`移到那里,\n",
"然后才能执行相加运算。\n",
"\n",
"![复制数据以在同一设备上执行操作](../img/copyto.svg)\n",
":label:`fig_copyto`\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "9e700cd2",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:10.284097Z",
"iopub.status.busy": "2023-08-18T06:58:10.283529Z",
"iopub.status.idle": "2023-08-18T06:58:10.290795Z",
"shell.execute_reply": "2023-08-18T06:58:10.290007Z"
},
"origin_pos": 40,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"tensor([[1., 1., 1.],\n",
" [1., 1., 1.]], device='cuda:0')\n",
"tensor([[1., 1., 1.],\n",
" [1., 1., 1.]], device='cuda:1')\n"
]
}
],
"source": [
"Z = X.cuda(1)\n",
"print(X)\n",
"print(Z)"
]
},
{
"cell_type": "markdown",
"id": "f57eab12",
"metadata": {
"origin_pos": 42
},
"source": [
"[**现在数据在同一个GPU上(`Z`和`Y`都在),我们可以将它们相加。**]\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "b2f04f35",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:10.295377Z",
"iopub.status.busy": "2023-08-18T06:58:10.294845Z",
"iopub.status.idle": "2023-08-18T06:58:10.301122Z",
"shell.execute_reply": "2023-08-18T06:58:10.300297Z"
},
"origin_pos": 43,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[1.4860, 1.1285, 1.0440],\n",
" [1.9743, 1.4159, 1.9979]], device='cuda:1')"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Y + Z"
]
},
{
"cell_type": "markdown",
"id": "9acbe573",
"metadata": {
"origin_pos": 45,
"tab": [
"pytorch"
]
},
"source": [
"假设变量`Z`已经存在于第二个GPU上。\n",
"如果我们还是调用`Z.cuda(1)`会发生什么?\n",
"它将返回`Z`,而不会复制并分配新内存。\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "d6b95aa1",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:10.305143Z",
"iopub.status.busy": "2023-08-18T06:58:10.304592Z",
"iopub.status.idle": "2023-08-18T06:58:10.309707Z",
"shell.execute_reply": "2023-08-18T06:58:10.308894Z"
},
"origin_pos": 48,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Z.cuda(1) is Z"
]
},
{
"cell_type": "markdown",
"id": "35568455",
"metadata": {
"origin_pos": 50
},
"source": [
"### 旁注\n",
"\n",
"人们使用GPU来进行机器学习,因为单个GPU相对运行速度快。\n",
"但是在设备(CPU、GPU和其他机器)之间传输数据比计算慢得多。\n",
"这也使得并行化变得更加困难,因为我们必须等待数据被发送(或者接收),\n",
"然后才能继续进行更多的操作。\n",
"这就是为什么拷贝操作要格外小心。\n",
"根据经验,多个小操作比一个大操作糟糕得多。\n",
"此外,一次执行几个操作比代码中散布的许多单个操作要好得多。\n",
"如果一个设备必须等待另一个设备才能执行其他操作,\n",
"那么这样的操作可能会阻塞。\n",
"这有点像排队订购咖啡,而不像通过电话预先订购:\n",
"当客人到店的时候,咖啡已经准备好了。\n",
"\n",
"最后,当我们打印张量或将张量转换为NumPy格式时,\n",
"如果数据不在内存中,框架会首先将其复制到内存中,\n",
"这会导致额外的传输开销。\n",
"更糟糕的是,它现在受制于全局解释器锁,使得一切都得等待Python完成。\n",
"\n",
"## [**神经网络与GPU**]\n",
"\n",
"类似地,神经网络模型可以指定设备。\n",
"下面的代码将模型参数放在GPU上。\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "587af904",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:10.313163Z",
"iopub.status.busy": "2023-08-18T06:58:10.312623Z",
"iopub.status.idle": "2023-08-18T06:58:10.336351Z",
"shell.execute_reply": "2023-08-18T06:58:10.335568Z"
},
"origin_pos": 52,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"net = nn.Sequential(nn.Linear(3, 1))\n",
"net = net.to(device=try_gpu())"
]
},
{
"cell_type": "markdown",
"id": "a834a04c",
"metadata": {
"origin_pos": 55
},
"source": [
"在接下来的几章中,\n",
"我们将看到更多关于如何在GPU上运行模型的例子,\n",
"因为它们将变得更加计算密集。\n",
"\n",
"当输入为GPU上的张量时,模型将在同一GPU上计算结果。\n"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "955f7f67",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:10.340989Z",
"iopub.status.busy": "2023-08-18T06:58:10.340312Z",
"iopub.status.idle": "2023-08-18T06:58:10.930969Z",
"shell.execute_reply": "2023-08-18T06:58:10.930143Z"
},
"origin_pos": 56,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[-0.4275],\n",
" [-0.4275]], device='cuda:0', grad_fn=<AddmmBackward0>)"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"net(X)"
]
},
{
"cell_type": "markdown",
"id": "fb9f9aef",
"metadata": {
"origin_pos": 57
},
"source": [
"让我们(**确认模型参数存储在同一个GPU上。**)\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "bd727993",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:10.935087Z",
"iopub.status.busy": "2023-08-18T06:58:10.934497Z",
"iopub.status.idle": "2023-08-18T06:58:10.939740Z",
"shell.execute_reply": "2023-08-18T06:58:10.938974Z"
},
"origin_pos": 59,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"device(type='cuda', index=0)"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"net[0].weight.data.device"
]
},
{
"cell_type": "markdown",
"id": "cf1bf3b2",
"metadata": {
"origin_pos": 62
},
"source": [
"总之,只要所有的数据和参数都在同一个设备上,\n",
"我们就可以有效地学习模型。\n",
"在下面的章节中,我们将看到几个这样的例子。\n",
"\n",
"## 小结\n",
"\n",
"* 我们可以指定用于存储和计算的设备,例如CPU或GPU。默认情况下,数据在主内存中创建,然后使用CPU进行计算。\n",
"* 深度学习框架要求计算的所有输入数据都在同一设备上,无论是CPU还是GPU。\n",
"* 不经意地移动数据可能会显著降低性能。一个典型的错误如下:计算GPU上每个小批量的损失,并在命令行中将其报告给用户(或将其记录在NumPy `ndarray`中)时,将触发全局解释器锁,从而使所有GPU阻塞。最好是为GPU内部的日志分配内存,并且只移动较大的日志。\n",
"\n",
"## 练习\n",
"\n",
"1. 尝试一个计算量更大的任务,比如大矩阵的乘法,看看CPU和GPU之间的速度差异。再试一个计算量很小的任务呢?\n",
"1. 我们应该如何在GPU上读写模型参数?\n",
"1. 测量计算1000个$100 \\times 100$矩阵的矩阵乘法所需的时间,并记录输出矩阵的Frobenius范数,一次记录一个结果,而不是在GPU上保存日志并仅传输最终结果。\n",
"1. 测量同时在两个GPU上执行两个矩阵乘法与在一个GPU上按顺序执行两个矩阵乘法所需的时间。提示:应该看到近乎线性的缩放。\n"
]
},
{
"cell_type": "markdown",
"id": "0460f3be",
"metadata": {
"origin_pos": 64,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/1841)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,159 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "6262ada7",
"metadata": {
"origin_pos": 0
},
"source": [
"# 安装\n",
":label:`chap_installation`\n",
"\n",
"我们需要配置一个环境来运行 Python、Jupyter Notebook、相关库以及运行本书所需的代码,以快速入门并获得动手学习经验。\n",
"\n",
"## 安装 Miniconda\n",
"\n",
"最简单的方法就是安装依赖Python 3.x的[Miniconda](https://conda.io/en/latest/miniconda.html)。\n",
"如果已安装conda,则可以跳过以下步骤。访问Miniconda网站,根据Python 3.x版本确定适合当前操作系统的安装包。\n",
"\n",
"如果我们使用macOS,假设Python版本是3.9(我们的测试版本),将下载名称包含字符串“MacOSX”的bash脚本,并执行以下操作:\n",
"\n",
"```bash\n",
"# 以Intel处理器为例,文件名可能会更改\n",
"sh Miniconda3-py39_4.12.0-MacOSX-x86_64.sh -b\n",
"```\n",
"\n",
"如果我们使用Linux,假设Python版本是3.9(我们的测试版本),将下载名称包含字符串“Linux”的bash脚本,并执行以下操作:\n",
"\n",
"```bash\n",
"# 文件名可能会更改\n",
"sh Miniconda3-py39_4.12.0-Linux-x86_64.sh -b\n",
"```\n",
"\n",
"接下来,初始化终端Shell,以便我们可以直接运行`conda`。\n",
"\n",
"```bash\n",
"~/miniconda3/bin/conda init\n",
"```\n",
"\n",
"现在关闭并重新打开当前的shell。并使用下面的命令创建一个新的环境:\n",
"\n",
"```bash\n",
"conda create --name d2l python=3.9 -y\n",
"```\n",
"\n",
"现在激活 `d2l` 环境:\n",
"\n",
"```bash\n",
"conda activate d2l\n",
"```\n",
"\n",
"## 安装深度学习框架和`d2l`软件包\n",
"\n",
"在安装深度学习框架之前,请先检查计算机上是否有可用的GPU。\n",
"例如可以查看计算机是否装有NVIDIA GPU并已安装[CUDA](https://developer.nvidia.com/cuda-downloads)。\n",
"如果机器没有任何GPU,没有必要担心,因为CPU在前几章完全够用。\n",
"但是,如果想流畅地学习全部章节,请提早获取GPU并且安装深度学习框架的GPU版本。\n"
]
},
{
"cell_type": "markdown",
"id": "ed0912e5",
"metadata": {
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"source": [
"我们可以按如下方式安装PyTorch的CPU或GPU版本:\n",
"\n",
"```bash\n",
"pip install torch==1.12.0\n",
"pip install torchvision==0.13.0\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "4e508102",
"metadata": {
"origin_pos": 5
},
"source": [
"我们的下一步是安装`d2l`包,以方便调取本书中经常使用的函数和类:\n",
"\n",
"```bash\n",
"pip install d2l==0.17.6\n",
"```\n",
"\n",
"## 下载 D2L Notebook\n",
"\n",
"接下来,需要下载这本书的代码。\n",
"可以点击本书HTML页面顶部的“Jupyter 记事本”选项下载后解压代码,或者可以按照如下方式进行下载:\n"
]
},
{
"cell_type": "markdown",
"id": "d6f62ffb",
"metadata": {
"origin_pos": 7,
"tab": [
"pytorch"
]
},
"source": [
"```bash\n",
"mkdir d2l-zh && cd d2l-zh\n",
"curl https://zh-v2.d2l.ai/d2l-zh-2.0.0.zip -o d2l-zh.zip\n",
"unzip d2l-zh.zip && rm d2l-zh.zip\n",
"cd pytorch\n",
"```\n",
"\n",
"\n",
"注意:如果没有安装`unzip`,则可以通过运行`sudo apt install unzip`进行安装。\n"
]
},
{
"cell_type": "markdown",
"id": "668ab9cb",
"metadata": {
"origin_pos": 10
},
"source": [
"安装完成后我们可以通过运行以下命令打开Jupyter笔记本(在Window系统的命令行窗口中运行以下命令前,需先将当前路径定位到刚下载的本书代码解压后的目录):\n",
"\n",
"```bash\n",
"jupyter notebook\n",
"```\n",
"\n",
"现在可以在Web浏览器中打开<http://localhost:8888>(通常会自动打开)。\n",
"由此,我们可以运行这本书中每个部分的代码。\n",
"在运行书籍代码、更新深度学习框架或`d2l`软件包之前,请始终执行`conda activate d2l`以激活运行时环境。\n",
"要退出环境,请运行`conda deactivate`。\n"
]
},
{
"cell_type": "markdown",
"id": "04be90e9",
"metadata": {
"origin_pos": 12,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/2083)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,767 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "ac2906b8",
"metadata": {
"origin_pos": 0
},
"source": [
"# 引言\n",
":label:`chap_introduction`\n",
"\n",
"时至今日,人们常用的计算机程序几乎都是软件开发人员从零编写的。\n",
"比如,现在开发人员要编写一个程序来管理网上商城。\n",
"经过思考,开发人员可能提出如下一个解决方案:\n",
"首先,用户通过Web浏览器(或移动应用程序)与应用程序进行交互;\n",
"紧接着,应用程序与数据库引擎进行交互,以保存交易历史记录并跟踪每个用户的动态;\n",
"其中,这个应用程序的核心——“业务逻辑”,详细说明了应用程序在各种情况下进行的操作。\n",
"\n",
"为了完善业务逻辑,开发人员必须细致地考虑应用程序所有可能遇到的边界情况,并为这些边界情况设计合适的规则。\n",
"当买家单击将商品添加到购物车时,应用程序会向购物车数据库表中添加一个条目,将该用户ID与商品ID关联起来。\n",
"虽然一次编写出完美应用程序的可能性微乎其微,但在大多数情况下,开发人员可以从上述的业务逻辑出发,编写出符合业务逻辑的应用程序,并不断测试直到满足用户的需求。\n",
"根据业务逻辑设计自动化系统,驱动正常运行的产品和系统,是一个人类认知上的非凡壮举。\n",
"\n",
"幸运的是,对日益壮大的机器学习科学家群体来说,实现很多任务的自动化并不再屈从于人类所能考虑到的逻辑。\n",
"想象一下,假如开发人员要试图解决以下问题之一:\n",
"\n",
"* 编写一个应用程序,接受地理信息、卫星图像和一些历史天气信息,并预测明天的天气;\n",
"* 编写一个应用程序,接受自然文本表示的问题,并正确回答该问题;\n",
"* 编写一个应用程序,接受一张图像,识别出该图像所包含的人,并在每个人周围绘制轮廓;\n",
"* 编写一个应用程序,向用户推荐他们可能喜欢,但在自然浏览过程中不太可能遇到的产品。\n",
"\n",
"在这些情况下,即使是顶级程序员也无法提出完美的解决方案,\n",
"原因可能各不相同。有时任务可能遵循一种随着时间推移而变化的模式,我们需要程序来自动调整。\n",
"有时任务内的关系可能太复杂(比如像素和抽象类别之间的关系),需要数千或数百万次的计算。\n",
"即使人类的眼睛能毫不费力地完成这些难以提出完美解决方案的任务,这其中的计算也超出了人类意识理解范畴。\n",
"*机器学习*machine learning,ML)是一类强大的可以从经验中学习的技术。\n",
"通常采用观测数据或与环境交互的形式,机器学习算法会积累更多的经验,其性能也会逐步提高。\n",
"相反,对于刚刚所说的电子商务平台,如果它一直执行相同的业务逻辑,无论积累多少经验,都不会自动提高,除非开发人员认识到问题并更新软件。\n",
"本书将带读者开启机器学习之旅,并特别关注*深度学习*(deep learningDL)的基础知识。\n",
"深度学习是一套强大的技术,它可以推动计算机视觉、自然语言处理、医疗保健和基因组学等不同领域的创新。\n",
"\n",
"## 日常生活中的机器学习\n",
"\n",
"机器学习应用在日常生活中的方方面面。\n",
"现在,假设本书的作者们一起驱车去咖啡店。\n",
"阿斯顿拿起一部iPhone,对它说道:“Hey Siri!”手机的语音识别系统就被唤醒了。\n",
"接着,李沐对Siri说道:“去星巴克咖啡店。”语音识别系统就自动触发语音转文字功能,并启动地图应用程序,\n",
"地图应用程序在启动后筛选了若干条路线,每条路线都显示了预计的通行时间......\n",
"由此可见,机器学习渗透在生活中的方方面面,在短短几秒钟的时间里,人们与智能手机的日常互动就可以涉及几种机器学习模型。\n",
"\n",
"现在,假如需要我们编写程序来响应一个“唤醒词”(比如“Alexa”“小爱同学”和“Hey Siri”)。\n",
"我们试着用一台计算机和一个代码编辑器编写代码,如 :numref:`fig_wake_word`中所示。\n",
"问题看似很难解决:麦克风每秒钟将收集大约44000个样本,每个样本都是声波振幅的测量值。而该测量值与唤醒词难以直接关联。那又该如何编写程序,令其输入麦克风采集到的原始音频片段,输出$\\{\\text{是}, \\text{否}\\}$(表示该片段是否包含唤醒词)的可靠预测呢?我们对编写这个程序毫无头绪,这就是需要机器学习的原因。\n",
"\n",
"![识别唤醒词](../img/wake-word.svg)\n",
":label:`fig_wake_word`\n",
"\n",
"通常,即使我们不知道怎样明确地告诉计算机如何从输入映射到输出,大脑仍然能够自己执行认知功能。\n",
"换句话说,即使我们不知道如何编写计算机程序来识别“Alexa”这个词,大脑自己也能够识别它。\n",
"有了这一能力,我们就可以收集一个包含大量音频样本的*数据集*(dataset),并对包含和不包含唤醒词的样本进行标记。\n",
"利用机器学习算法,我们不需要设计一个“明确地”识别唤醒词的系统。\n",
"相反,我们只需要定义一个灵活的程序算法,其输出由许多*参数*(parameter)决定,然后使用数据集来确定当下的“最佳参数集”,这些参数通过某种性能度量方式来达到完成任务的最佳性能。\n",
"\n",
"那么到底什么是参数呢?\n",
"参数可以被看作旋钮,旋钮的转动可以调整程序的行为。\n",
"任一调整参数后的程序被称为*模型*model)。\n",
"通过操作参数而生成的所有不同程序(输入-输出映射)的集合称为“模型族”。\n",
"使用数据集来选择参数的元程序被称为*学习算法*(learning algorithm)。\n",
"\n",
"在开始用机器学习算法解决问题之前,我们必须精确地定义问题,确定*输入*(input)和*输出*(output)的性质,并选择合适的模型族。\n",
"在本例中,模型接收一段音频作为输入,然后在是或否中生成一个选择作为输出。\n",
"如果一切顺利,经过一番训练,模型对于“片段是否包含唤醒词”的预测通常是正确的。\n",
"\n",
"现在模型每次听到“Alexa”这个词时都会发出“是”的声音。\n",
"由于这里的唤醒词是任意选择的自然语言,因此我们可能需要一个足够丰富的模型族,使模型多元化。\n",
"比如,模型族的另一个模型只在听到“Hey Siri”这个词时发出“是”。\n",
"理想情况下,同一个模型族应该适合于“Alexa”识别和“Hey Siri”识别,因为从直觉上看,它们似乎是相似的任务。\n",
"然而,如果我们想处理完全不同的输入或输出,比如:从图像映射到字幕,或从英语映射到中文,可能需要一个完全不同的模型族。\n",
"\n",
"但如果模型所有的按钮(模型参数)都被随机设置,就不太可能识别出“Alexa”“Hey Siri”或任何其他单词。\n",
"在机器学习中,*学习*(learning)是一个训练模型的过程。\n",
"通过这个过程,我们可以发现正确的参数集,从而使模型强制执行所需的行为。\n",
"换句话说,我们用数据*训练*train)模型。\n",
"如 :numref:`fig_ml_loop`所示,训练过程通常包含如下步骤:\n",
"\n",
"1. 从一个随机初始化参数的模型开始,这个模型基本没有“智能”;\n",
"1. 获取一些数据样本(例如,音频片段以及对应的是或否标签);\n",
"1. 调整参数,使模型在这些样本中表现得更好;\n",
"1. 重复第(2)步和第(3)步,直到模型在任务中的表现令人满意。\n",
"\n",
"![一个典型的训练过程](../img/ml-loop.svg)\n",
":label:`fig_ml_loop`\n",
"\n",
"总而言之,我们没有编写唤醒词识别器,而是编写了一个“学习”程序。\n",
"如果我们用一个巨大的带标签的数据集,它很可能可以“学习”识别唤醒词。\n",
"这种“通过用数据集来确定程序行为”的方法可以被看作*用数据编程*(programming with data)。\n",
"比如,我们可以通过向机器学习系统,提供许多猫和狗的图片来设计一个“猫图检测器”。\n",
"检测器最终可以学会:如果输入是猫的图片就输出一个非常大的正数,如果输入是狗的图片就会输出一个非常小的负数。\n",
"如果检测器不确定输入的图片中是猫还是狗,它会输出接近于零的数......\n",
"这个例子仅仅是机器学习常见应用的冰山一角,\n",
"而深度学习是机器学习的一个主要分支,本节稍后的内容将对其进行更详细的解析。\n",
"\n",
"## 机器学习中的关键组件\n",
"\n",
"首先介绍一些核心组件。无论什么类型的机器学习问题,都会遇到这些组件:\n",
"\n",
"1. 可以用来学习的*数据*data);\n",
"1. 如何转换数据的*模型*model);\n",
"1. 一个*目标函数*objective function),用来量化模型的有效性;\n",
"1. 调整模型参数以优化目标函数的*算法*algorithm)。\n",
"\n",
"### 数据\n",
"\n",
"毋庸置疑,如果没有数据,那么数据科学毫无用武之地。\n",
"每个数据集由一个个*样本*example, sample)组成,大多时候,它们遵循独立同分布(independently and identically distributed, i.i.d.)。\n",
"样本有时也叫做*数据点*(data point)或者*数据实例*data instance),通常每个样本由一组称为*特征*(features,或*协变量*covariates))的属性组成。\n",
"机器学习模型会根据这些属性进行预测。\n",
"在上面的监督学习问题中,要预测的是一个特殊的属性,它被称为*标签*(label,或*目标*target))。\n",
"\n",
"当处理图像数据时,每一张单独的照片即为一个样本,它的特征由每个像素数值的有序列表表示。\n",
"比如,$200\\times200$彩色照片由$200\\times200\\times3=120000$个数值组成,其中的“3”对应于每个空间位置的红、绿、蓝通道的强度。\n",
"再比如,对于一组医疗数据,给定一组标准的特征(如年龄、生命体征和诊断),此数据可以用来尝试预测患者是否会存活。\n",
"\n",
"当每个样本的特征类别数量都是相同的时候,其特征向量是固定长度的,这个长度被称为数据的*维数*(dimensionality)。\n",
"固定长度的特征向量是一个方便的属性,它可以用来量化学习大量样本。\n",
"\n",
"然而,并不是所有的数据都可以用“固定长度”的向量表示。\n",
"以图像数据为例,如果它们全部来自标准显微镜设备,那么“固定长度”是可取的;\n",
"但是如果图像数据来自互联网,它们很难具有相同的分辨率或形状。\n",
"这时,将图像裁剪成标准尺寸是一种方法,但这种办法很局限,有丢失信息的风险。\n",
"此外,文本数据更不符合“固定长度”的要求。\n",
"比如,对于亚马逊等电子商务网站上的客户评论,有些文本数据很简短(比如“好极了”),有些则长篇大论。\n",
"与传统机器学习方法相比,深度学习的一个主要优势是可以处理不同长度的数据。\n",
"\n",
"一般来说,拥有越多数据的时候,工作就越容易。\n",
"更多的数据可以被用来训练出更强大的模型,从而减少对预先设想假设的依赖。\n",
"数据集的由小变大为现代深度学习的成功奠定基础。\n",
"在没有大数据集的情况下,许多令人兴奋的深度学习模型黯然失色。\n",
"就算一些深度学习模型在小数据集上能够工作,但其效能并不比传统方法高。\n",
"\n",
"请注意,仅仅拥有海量的数据是不够的,我们还需要正确的数据。\n",
"如果数据中充满了错误,或者如果数据的特征不能预测任务目标,那么模型很可能无效。\n",
"有一句古语很好地反映了这个现象:“输入的是垃圾,输出的也是垃圾。”(“Garbage in, garbage out.”)\n",
"此外,糟糕的预测性能甚至会加倍放大事态的严重性。\n",
"在一些敏感应用中,如预测性监管、简历筛选和用于贷款的风险模型,我们必须特别警惕垃圾数据带来的后果。\n",
"一种常见的问题来自不均衡的数据集,比如在一个有关医疗的训练数据集中,某些人群没有样本表示。\n",
"想象一下,假设我们想要训练一个皮肤癌识别模型,但它(在训练数据集中)从未“见过”黑色皮肤的人群,这个模型就会顿时束手无策。\n",
"\n",
"再比如,如果用“过去的招聘决策数据”来训练一个筛选简历的模型,那么机器学习模型可能会无意中捕捉到历史残留的不公正,并将其自动化。\n",
"然而,这一切都可能在不知情的情况下发生。\n",
"因此,当数据不具有充分代表性,甚至包含了一些社会偏见时,模型就很有可能有偏见。\n",
"\n",
"\n",
"### 模型\n",
"\n",
"大多数机器学习会涉及到数据的转换。\n",
"比如一个“摄取照片并预测笑脸”的系统。再比如通过摄取到的一组传感器读数预测读数的正常与异常程度。\n",
"虽然简单的模型能够解决如上简单的问题,但本书中关注的问题超出了经典方法的极限。\n",
"深度学习与经典方法的区别主要在于:前者关注的功能强大的模型,这些模型由神经网络错综复杂的交织在一起,包含层层数据转换,因此被称为*深度学习*(deep learning)。\n",
"在讨论深度模型的过程中,本书也将提及一些传统方法。\n",
"\n",
"\n",
"### 目标函数\n",
"\n",
"前面的内容将机器学习介绍为“从经验中学习”。\n",
"这里所说的“学习”,是指自主提高模型完成某些任务的效能。\n",
"但是,什么才算真正的提高呢?\n",
"在机器学习中,我们需要定义模型的优劣程度的度量,这个度量在大多数情况是“可优化”的,这被称之为*目标函数*(objective function)。\n",
"我们通常定义一个目标函数,并希望优化它到最低点。\n",
"因为越低越好,所以这些函数有时被称为*损失函数*(loss function,或cost function)。\n",
"但这只是一个惯例,我们也可以取一个新的函数,优化到它的最高点。\n",
"这两个函数本质上是相同的,只是翻转一下符号。\n",
"\n",
"当任务在试图预测数值时,最常见的损失函数是*平方误差*(squared error),即预测值与实际值之差的平方。\n",
"当试图解决分类问题时,最常见的目标函数是最小化错误率,即预测与实际情况不符的样本比例。\n",
"有些目标函数(如平方误差)很容易被优化,有些目标(如错误率)由于不可微性或其他复杂性难以直接优化。\n",
"在这些情况下,通常会优化*替代目标*。\n",
"\n",
"通常,损失函数是根据模型参数定义的,并取决于数据集。\n",
"在一个数据集上,我们可以通过最小化总损失来学习模型参数的最佳值。\n",
"该数据集由一些为训练而收集的样本组成,称为*训练数据集*(training dataset,或称为*训练集*training set))。\n",
"然而,在训练数据上表现良好的模型,并不一定在“新数据集”上有同样的性能,这里的“新数据集”通常称为*测试数据集*(test dataset,或称为*测试集*test set))。\n",
"\n",
"综上所述,可用数据集通常可以分成两部分:训练数据集用于拟合模型参数,测试数据集用于评估拟合的模型。\n",
"然后我们观察模型在这两部分数据集的性能。\n",
"“一个模型在训练数据集上的性能”可以被想象成“一个学生在模拟考试中的分数”。\n",
"这个分数用来为一些真正的期末考试做参考,即使成绩令人鼓舞,也不能保证期末考试成功。\n",
"换言之,测试性能可能会显著偏离训练性能。\n",
"当一个模型在训练集上表现良好,但不能推广到测试集时,这个模型被称为*过拟合*(overfitting)的。\n",
"就像在现实生活中,尽管模拟考试考得很好,真正的考试不一定百发百中。\n",
"\n",
"\n",
"### 优化算法\n",
"\n",
"当我们获得了一些数据源及其表示、一个模型和一个合适的损失函数,接下来就需要一种算法,它能够搜索出最佳参数,以最小化损失函数。\n",
"深度学习中,大多流行的优化算法通常基于一种基本方法--*梯度下降*gradient descent)。\n",
"简而言之,在每个步骤中,梯度下降法都会检查每个参数,看看如果仅对该参数进行少量变动,训练集损失会朝哪个方向移动。\n",
"然后,它在可以减少损失的方向上优化参数。\n",
"\n",
"\n",
"## 各种机器学习问题\n",
"\n",
"在机器学习的广泛应用中,唤醒词问题只是冰山一角。\n",
"前面唤醒词识别的例子,只是机器学习可以解决的众多问题中的一个。\n",
"下面将列出一些常见的机器学习问题和应用,为之后本书的讨论做铺垫。\n",
"接下来会经常引用前面提到的概念,如数据、模型和优化算法。\n",
"\n",
"### 监督学习\n",
"\n",
"*监督学习*supervised learning)擅长在“给定输入特征”的情况下预测标签。\n",
"每个“特征-标签”对都称为一个*样本*example)。\n",
"有时,即使标签是未知的,样本也可以指代输入特征。\n",
"我们的目标是生成一个模型,能够将任何输入特征映射到标签(即预测)。\n",
"\n",
"举一个具体的例子:\n",
"假设我们需要预测患者的心脏病是否会发作,那么观察结果“心脏病发作”或“心脏病没有发作”将是样本的标签。\n",
"输入特征可能是生命体征,如心率、舒张压和收缩压等。\n",
"\n",
"监督学习之所以能发挥作用,是因为在训练参数时,我们为模型提供了一个数据集,其中每个样本都有真实的标签。\n",
"用概率论术语来说,我们希望预测“估计给定输入特征的标签”的条件概率。\n",
"虽然监督学习只是几大类机器学习问题之一,但是在工业中,大部分机器学习的成功应用都使用了监督学习。\n",
"这是因为在一定程度上,许多重要的任务可以清晰地描述为,在给定一组特定的可用数据的情况下,估计未知事物的概率。比如:\n",
"\n",
"* 根据计算机断层扫描(Computed TomographyCT)肿瘤图像,预测是否为癌症;\n",
"* 给出一个英语句子,预测正确的法语翻译;\n",
"* 根据本月的财务报告数据,预测下个月股票的价格;\n",
"\n",
"监督学习的学习过程一般可以分为三大步骤:\n",
"\n",
"1. 从已知大量数据样本中随机选取一个子集,为每个样本获取真实标签。有时,这些样本已有标签(例如,患者是否在下一年内康复?);有时,这些样本可能需要被人工标记(例如,图像分类)。这些输入和相应的标签一起构成了训练数据集;\n",
"2. 选择有监督的学习算法,它将训练数据集作为输入,并输出一个“已完成学习的模型”;\n",
"3. 将之前没有见过的样本特征放到这个“已完成学习的模型”中,使用模型的输出作为相应标签的预测。\n",
"\n",
"整个监督学习过程如 :numref:`fig_supervised_learning` 所示。\n",
"\n",
"![监督学习](../img/supervised-learning.svg)\n",
":label:`fig_supervised_learning`\n",
"\n",
"综上所述,即使使用简单的描述给定输入特征的预测标签,监督学习也可以采取多种形式的模型,并且需要大量不同的建模决策,这取决于输入和输出的类型、大小和数量。\n",
"例如,我们使用不同的模型来处理“任意长度的序列”或“固定长度的序列”。\n",
"\n",
"#### 回归\n",
"\n",
"*回归*regression)是最简单的监督学习任务之一。\n",
"假设有一组房屋销售数据表格,其中每行对应一个房子,每列对应一个相关的属性,例如房屋的面积、卧室的数量、浴室的数量以及到镇中心的步行距离,等等。\n",
"每一行的属性构成了一个房子样本的特征向量。\n",
"如果一个人住在纽约或旧金山,而且他不是亚马逊、谷歌、微软或Facebook的首席执行官,那么他家的特征向量(房屋面积,卧室数量,浴室数量,步行距离)可能类似于:$[600, 1, 1, 60]$。\n",
"如果一个人住在匹兹堡,这个特征向量可能更接近$[3000, 4, 3, 10]$......\n",
"当人们在市场上寻找新房子时,可能需要估计一栋房子的公平市场价值。\n",
"为什么这个任务可以归类为回归问题呢?本质上是输出决定的。\n",
"销售价格(即标签)是一个数值。\n",
"当标签取任意数值时,我们称之为*回归*问题,此时的目标是生成一个模型,使它的预测非常接近实际标签值。\n",
"\n",
"生活中的许多问题都可归类为回归问题。\n",
"比如,预测用户对一部电影的评分可以被归类为一个回归问题。\n",
"这里有一个小插曲:在2009年,如果有人设计了一个很棒的算法来预测电影评分,那可能会赢得[100万美元的奈飞奖](https://en.wikipedia.org/wiki/Netflix_Prize)。\n",
"再比如,预测病人在医院的住院时间也是一个回归问题。\n",
"总而言之,判断回归问题的一个很好的经验法则是,任何有关“有多少”的问题很可能就是回归问题。比如:\n",
"\n",
"* 这个手术需要多少小时;\n",
"* 在未来6小时,这个镇会有多少降雨量。\n",
"\n",
"即使你以前从未使用过机器学习,可能在不经意间,已经解决了一些回归问题。\n",
"例如,你让人修理了排水管,承包商花了3小时清除污水管道中的污物,然后他寄给你一张350美元的账单。\n",
"而你的朋友雇了同一个承包商2小时,他收到了250美元的账单。\n",
"如果有人请你估算清理污物的费用,你可以假设承包商收取一些基本费用,然后按小时收费。\n",
"如果这些假设成立,那么给出这两个数据样本,你就已经可以确定承包商的定价结构:50美元上门服务费,另外每小时100美元。\n",
"在不经意间,你就已经理解并应用了线性回归算法。\n",
"\n",
"然而,以上假设有时并不可取。\n",
"例如,一些差异是由于两个特征之外的几个因素造成的。\n",
"在这些情况下,我们将尝试学习最小化“预测值和实际标签值的差异”的模型。\n",
"本书大部分章节将关注平方误差损失函数的最小化。\n",
"\n",
"#### 分类\n",
"\n",
"虽然回归模型可以很好地解决“有多少”的问题,但是很多问题并非如此。\n",
"例如,一家银行希望在其移动应用程序中添加支票扫描功能。\n",
"具体地说,这款应用程序能够自动理解从图像中看到的文本,并将手写字符映射到对应的已知字符之上。\n",
"这种“哪一个”的问题叫做*分类*classification)问题。\n",
"*分类*问题希望模型能够预测样本属于哪个*类别*(category,正式称为*类*class))。\n",
"例如,手写数字可能有10类,标签被设置为数字0~9。\n",
"最简单的分类问题是只有两类,这被称之为*二项分类*binomial classification)。\n",
"例如,数据集可能由动物图像组成,标签可能是$\\mathrm{\\{猫, 狗\\}}$两类。\n",
"回归是训练一个回归函数来输出一个数值;\n",
"分类是训练一个分类器来输出预测的类别。\n",
"\n",
"然而模型怎么判断得出这种“是”或“不是”的硬分类预测呢?\n",
"我们可以试着用概率语言来理解模型。\n",
"给定一个样本特征,模型为每个可能的类分配一个概率。\n",
"比如,之前的猫狗分类例子中,分类器可能会输出图像是猫的概率为0.9。\n",
"0.9这个数字表达什么意思呢?\n",
"可以这样理解:分类器90%确定图像描绘的是一只猫。\n",
"预测类别的概率的大小传达了一种模型的不确定性,本书后面章节将讨论其他运用不确定性概念的算法。\n",
"\n",
"当有两个以上的类别时,我们把这个问题称为*多项分类*multiclass classification)问题。\n",
"常见的例子包括手写字符识别 $\\mathrm{\\{0, 1, 2, ... 9, a, b, c, ...\\}}$。\n",
"与解决回归问题不同,分类问题的常见损失函数被称为*交叉熵*(cross-entropy),本书 :numref:`sec_softmax` 将详细阐述。\n",
"\n",
"请注意,最常见的类别不一定是最终用于决策的类别。\n",
"举个例子,假设后院有一个如 :numref:`fig_death_cap` 所示的蘑菇。\n",
"\n",
"![死帽蕈——不能吃!!](../img/death-cap.jpg)\n",
":width:`200px`\n",
":label:`fig_death_cap`\n",
"\n",
"现在,我们想要训练一个毒蘑菇检测分类器,根据照片预测蘑菇是否有毒。\n",
"假设这个分类器输出 :numref:`fig_death_cap` 包含死帽蕈的概率是0.2。\n",
"换句话说,分类器80%确定图中的蘑菇不是死帽蕈。\n",
"尽管如此,我们也不会吃它,因为不值得冒20%的死亡风险。\n",
"换句话说,不确定风险的影响远远大于收益。\n",
"因此,我们需要将“预期风险”作为损失函数,即需要将结果的概率乘以与之相关的收益(或伤害)。\n",
"在这种情况下,食用蘑菇造成的损失为$0.2 \\times \\infty + 0.8 \\times 0 = \\infty$,而丢弃蘑菇的损失为$0.2 \\times 0 + 0.8 \\times 1 = 0.8$。\n",
"事实上,谨慎是有道理的, :numref:`fig_death_cap`中的蘑菇实际上是一个死帽蕈。\n",
"\n",
"分类可能变得比二项分类、多项分类复杂得多。\n",
"例如,有一些分类任务的变体可以用于寻找层次结构,层次结构假定在许多类之间存在某种关系。\n",
"因此,并不是所有的错误都是均等的。\n",
"人们宁愿错误地分入一个相关的类别,也不愿错误地分入一个遥远的类别,这通常被称为*层次分类*(hierarchical classification)。\n",
"早期的一个例子是[卡尔·林奈](https://en.wikipedia.org/wiki/Carl_Linnaeus),他对动物进行了层次分类。\n",
"\n",
"在动物分类的应用中,把一只狮子狗误认为雪纳瑞可能不会太糟糕。\n",
"但如果模型将狮子狗与恐龙混淆,就滑稽至极了。\n",
"层次结构相关性可能取决于模型的使用者计划如何使用模型。\n",
"例如,响尾蛇和乌梢蛇血缘上可能很接近,但如果把响尾蛇误认为是乌梢蛇可能会是致命的。\n",
"因为响尾蛇是有毒的,而乌梢蛇是无毒的。\n",
"\n",
"#### 标记问题\n",
"\n",
"有些分类问题很适合于二项分类或多项分类。\n",
"例如,我们可以训练一个普通的二项分类器来区分猫和狗。\n",
"运用最前沿的计算机视觉的算法,这个模型可以很轻松地被训练。\n",
"尽管如此,无论模型有多精确,当分类器遇到新的动物时可能会束手无策。\n",
"比如 :numref:`fig_stackedanimals`所示的这张“不来梅的城市音乐家”的图像 (这是一个流行的德国童话故事),图中有一只猫、一只公鸡、一只狗、一头驴,背景是一些树。\n",
"取决于我们最终想用模型做什么,将其视为二项分类问题可能没有多大意义。\n",
"取而代之,我们可能想让模型描绘输入图像的内容,一只猫、一只公鸡、一只狗,还有一头驴。\n",
"\n",
"![一只猫、一只公鸡、一只狗、一头驴](../img/stackedanimals.png)\n",
":width:`300px`\n",
":label:`fig_stackedanimals`\n",
"\n",
"学习预测不相互排斥的类别的问题称为*多标签分类*multi-label classification)。\n",
"举个例子,人们在技术博客上贴的标签,比如“机器学习”“技术”“小工具”“编程语言”“Linux”“云计算”“AWS”。\n",
"一篇典型的文章可能会用5~10个标签,因为这些概念是相互关联的。\n",
"关于“云计算”的帖子可能会提到“AWS”,而关于“机器学习”的帖子也可能涉及“编程语言”。\n",
"\n",
"此外,在处理生物医学文献时,我们也会遇到这类问题。\n",
"正确地标记文献很重要,有利于研究人员对文献进行详尽的审查。\n",
"在美国国家医学图书馆(The United States National Library of Medicine),一些专业的注释员会检查每一篇在PubMed中被索引的文章,以便将其与Mesh中的相关术语相关联(Mesh是一个大约有28000个标签的集合)。\n",
"这是一个十分耗时的过程,注释器通常在归档和标记之间有一年的延迟。\n",
"这里,机器学习算法可以提供临时标签,直到每一篇文章都有严格的人工审核。\n",
"事实上,近几年来,BioASQ组织已经[举办比赛](http://bioasq.org/)来完成这项工作。\n",
"\n",
"\n",
"#### 搜索\n",
"\n",
"有时,我们不仅仅希望输出一个类别或一个实值。\n",
"在信息检索领域,我们希望对一组项目进行排序。\n",
"以网络搜索为例,目标不是简单的“查询(query)-网页(page)”分类,而是在海量搜索结果中找到用户最需要的那部分。\n",
"搜索结果的排序也十分重要,学习算法需要输出有序的元素子集。\n",
"换句话说,如果要求我们输出字母表中的前5个字母,返回“A、B、C、D、E”和“C、A、B、E、D”是不同的。\n",
"即使结果集是相同的,集内的顺序有时却很重要。\n",
"\n",
"该问题的一种可能的解决方案:首先为集合中的每个元素分配相应的相关性分数,然后检索评级最高的元素。[PageRank](https://en.wikipedia.org/wiki/PageRank),谷歌搜索引擎背后最初的秘密武器就是这种评分系统的早期例子,但它的奇特之处在于它不依赖于实际的查询。\n",
"在这里,他们依靠一个简单的相关性过滤来识别一组相关条目,然后根据PageRank对包含查询条件的结果进行排序。\n",
"如今,搜索引擎使用机器学习和用户行为模型来获取网页相关性得分,很多学术会议也致力于这一主题。\n",
"\n",
"\n",
"#### 推荐系统\n",
":label:`subsec_recommender_systems`\n",
"\n",
"另一类与搜索和排名相关的问题是*推荐系统*(recommender system),它的目标是向特定用户进行“个性化”推荐。\n",
"例如,对于电影推荐,科幻迷和喜剧爱好者的推荐结果页面可能会有很大不同。\n",
"类似的应用也会出现在零售产品、音乐和新闻推荐等等。\n",
"\n",
"在某些应用中,客户会提供明确反馈,表达他们对特定产品的喜爱程度。\n",
"例如,亚马逊上的产品评级和评论。\n",
"在其他一些情况下,客户会提供隐性反馈。\n",
"例如,某用户跳过播放列表中的某些歌曲,这可能说明这些歌曲对此用户不大合适。\n",
"总的来说,推荐系统会为“给定用户和物品”的匹配性打分,这个“分数”可能是估计的评级或购买的概率。\n",
"由此,对于任何给定的用户,推荐系统都可以检索得分最高的对象集,然后将其推荐给用户。以上只是简单的算法,而工业生产的推荐系统要先进得多,它会将详细的用户活动和项目特征考虑在内。\n",
"推荐系统算法经过调整,可以捕捉一个人的偏好。\n",
"比如, :numref:`fig_deeplearning_amazon` 是亚马逊基于个性化算法推荐的深度学习书籍,成功地捕捉了作者的喜好。\n",
"\n",
"![亚马逊推荐的深度学习书籍](../img/deeplearning-amazon.jpg)\n",
":label:`fig_deeplearning_amazon`\n",
"\n",
"尽管推荐系统具有巨大的应用价值,但单纯用它作为预测模型仍存在一些缺陷。\n",
"首先,我们的数据只包含“审查后的反馈”:用户更倾向于给他们感觉强烈的事物打分。\n",
"例如,在五分制电影评分中,会有许多五星级和一星级评分,但三星级却明显很少。\n",
"此外,推荐系统有可能形成反馈循环:推荐系统首先会优先推送一个购买量较大(可能被认为更好)的商品,然而目前用户的购买习惯往往是遵循推荐算法,但学习算法并不总是考虑到这一细节,进而更频繁地被推荐。\n",
"综上所述,关于如何处理审查、激励和反馈循环的许多问题,都是重要的开放性研究问题。\n",
"\n",
"#### 序列学习\n",
"\n",
"以上大多数问题都具有固定大小的输入和产生固定大小的输出。\n",
"例如,在预测房价的问题中,我们考虑从一组固定的特征:房屋面积、卧室数量、浴室数量、步行到市中心的时间;\n",
"图像分类问题中,输入为固定尺寸的图像,输出则为固定数量(有关每一个类别)的预测概率;\n",
"在这些情况下,模型只会将输入作为生成输出的“原料”,而不会“记住”输入的具体内容。\n",
"\n",
"如果输入的样本之间没有任何关系,以上模型可能完美无缺。\n",
"但是如果输入是连续的,模型可能就需要拥有“记忆”功能。\n",
"比如,我们该如何处理视频片段呢?\n",
"在这种情况下,每个视频片段可能由不同数量的帧组成。\n",
"通过前一帧的图像,我们可能对后一帧中发生的事情更有把握。\n",
"语言也是如此,机器翻译的输入和输出都为文字序列。\n",
"\n",
"再比如,在医学上序列输入和输出就更为重要。\n",
"设想一下,假设一个模型被用来监控重症监护病人,如果他们在未来24小时内死亡的风险超过某个阈值,这个模型就会发出警报。\n",
"我们绝不希望抛弃过去每小时有关病人病史的所有信息,而仅根据最近的测量结果做出预测。\n",
"\n",
"这些问题是序列学习的实例,是机器学习最令人兴奋的应用之一。\n",
"序列学习需要摄取输入序列或预测输出序列,或两者兼而有之。\n",
"具体来说,输入和输出都是可变长度的序列,例如机器翻译和从语音中转录文本。\n",
"虽然不可能考虑所有类型的序列转换,但以下特殊情况值得一提。\n",
"\n",
"**标记和解析**。这涉及到用属性注释文本序列。\n",
"换句话说,输入和输出的数量基本上是相同的。\n",
"例如,我们可能想知道动词和主语在哪里,或者可能想知道哪些单词是命名实体。\n",
"通常,目标是基于结构和语法假设对文本进行分解和注释,以获得一些注释。\n",
"这听起来比实际情况要复杂得多。\n",
"下面是一个非常简单的示例,它使用“标记”来注释一个句子,该标记指示哪些单词引用命名实体。\n",
"标记为“Ent”,是*实体*entity)的简写。\n",
"\n",
"```text\n",
"Tom has dinner in Washington with Sally\n",
"Ent - - - Ent - Ent\n",
"```\n",
"\n",
"**自动语音识别**。在语音识别中,输入序列是说话人的录音(如 :numref:`fig_speech` 所示),输出序列是说话人所说内容的文本记录。\n",
"它的挑战在于,与文本相比,音频帧多得多(声音通常以8kHz或16kHz采样)。\n",
"也就是说,音频和文本之间没有1:1的对应关系,因为数千个样本可能对应于一个单独的单词。\n",
"这也是“序列到序列”的学习问题,其中输出比输入短得多。\n",
"\n",
"![`-D-e-e-p- L-ea-r-ni-ng-` 在录音中。](../img/speech.png)\n",
":width:`700px`\n",
":label:`fig_speech`\n",
"\n",
"**文本到语音**。这与自动语音识别相反。\n",
"换句话说,输入是文本,输出是音频文件。\n",
"在这种情况下,输出比输入长得多。\n",
"虽然人类很容易识判断发音别扭的音频文件,但这对计算机来说并不是那么简单。\n",
"\n",
"**机器翻译**。\n",
"在语音识别中,输入和输出的出现顺序基本相同。\n",
"而在机器翻译中,颠倒输入和输出的顺序非常重要。\n",
"换句话说,虽然我们仍将一个序列转换成另一个序列,但是输入和输出的数量以及相应序列的顺序大都不会相同。\n",
"比如下面这个例子,“错误的对齐”反应了德国人喜欢把动词放在句尾的特殊倾向。\n",
"\n",
"```text\n",
"德语: Haben Sie sich schon dieses grossartige Lehrwerk angeschaut?\n",
"英语: Did you already check out this excellent tutorial?\n",
"错误的对齐: Did you yourself already this excellent tutorial looked-at?\n",
"```\n",
"\n",
"其他学习任务也有序列学习的应用。\n",
"例如,确定“用户阅读网页的顺序”是二维布局分析问题。\n",
"再比如,对话问题对序列的学习更为复杂:确定下一轮对话,需要考虑对话历史状态以及现实世界的知识......\n",
"如上这些都是热门的序列学习研究领域。\n",
"\n",
"\n",
"### 无监督学习\n",
"\n",
"到目前为止,所有的例子都与监督学习有关,即需要向模型提供巨大数据集:每个样本包含特征和相应标签值。\n",
"打趣一下,“监督学习”模型像一个打工仔,有一份极其专业的工作和一位极其平庸的老板。\n",
"老板站在身后,准确地告诉模型在每种情况下应该做什么,直到模型学会从情况到行动的映射。\n",
"取悦这位老板很容易,只需尽快识别出模式并模仿他们的行为即可。\n",
"\n",
"相反,如果工作没有十分具体的目标,就需要“自发”地去学习了。\n",
"比如,老板可能会给我们一大堆数据,然后要求用它做一些数据科学研究,却没有对结果有要求。\n",
"这类数据中不含有“目标”的机器学习问题通常被为*无监督学习*unsupervised learning),\n",
"本书后面的章节将讨论无监督学习技术。\n",
"那么无监督学习可以回答什么样的问题呢?来看看下面的例子。\n",
"\n",
"* *聚类*clustering)问题:没有标签的情况下,我们是否能给数据分类呢?比如,给定一组照片,我们能把它们分成风景照片、狗、婴儿、猫和山峰的照片吗?同样,给定一组用户的网页浏览记录,我们能否将具有相似行为的用户聚类呢?\n",
"* *主成分分析*principal component analysis)问题:我们能否找到少量的参数来准确地捕捉数据的线性相关属性?比如,一个球的运动轨迹可以用球的速度、直径和质量来描述。再比如,裁缝们已经开发出了一小部分参数,这些参数相当准确地描述了人体的形状,以适应衣服的需要。另一个例子:在欧几里得空间中是否存在一种(任意结构的)对象的表示,使其符号属性能够很好地匹配?这可以用来描述实体及其关系,例如“罗马” $-$ “意大利” $+$ “法国” $=$ “巴黎”。\n",
"* *因果关系*causality)和*概率图模型*probabilistic graphical models)问题:我们能否描述观察到的许多数据的根本原因?例如,如果我们有关于房价、污染、犯罪、地理位置、教育和工资的人口统计数据,我们能否简单地根据经验数据发现它们之间的关系?\n",
"* *生成对抗性网络*generative adversarial networks):为我们提供一种合成数据的方法,甚至像图像和音频这样复杂的非结构化数据。潜在的统计机制是检查真实和虚假数据是否相同的测试,它是无监督学习的另一个重要而令人兴奋的领域。\n",
"\n",
"\n",
"### 与环境互动\n",
"\n",
"有人一直心存疑虑:机器学习的输入(数据)来自哪里?机器学习的输出又将去往何方?\n",
"到目前为止,不管是监督学习还是无监督学习,我们都会预先获取大量数据,然后启动模型,不再与环境交互。\n",
"这里所有学习都是在算法与环境断开后进行的,被称为*离线学习*(offline learning)。\n",
"对于监督学习,从环境中收集数据的过程类似于 :numref:`fig_data_collection`。\n",
"\n",
"![从环境中为监督学习收集数据。](../img/data-collection.svg)\n",
":label:`fig_data_collection`\n",
"\n",
"这种简单的离线学习有它的魅力。\n",
"好的一面是,我们可以孤立地进行模式识别,而不必分心于其他问题。\n",
"但缺点是,解决的问题相当有限。\n",
"这时我们可能会期望人工智能不仅能够做出预测,而且能够与真实环境互动。\n",
"与预测不同,“与真实环境互动”实际上会影响环境。\n",
"这里的人工智能是“智能代理”,而不仅是“预测模型”。\n",
"因此,我们必须考虑到它的行为可能会影响未来的观察结果。\n",
"\n",
"考虑“与真实环境互动”将打开一整套新的建模问题。以下只是几个例子。\n",
"\n",
"* 环境还记得我们以前做过什么吗?\n",
"* 环境是否有助于我们建模?例如,用户将文本读入语音识别器。\n",
"* 环境是否想要打败模型?例如,一个对抗性的设置,如垃圾邮件过滤或玩游戏?\n",
"* 环境是否重要?\n",
"* 环境是否变化?例如,未来的数据是否总是与过去相似,还是随着时间的推移会发生变化?是自然变化还是响应我们的自动化工具而发生变化?\n",
"\n",
"当训练和测试数据不同时,最后一个问题提出了*分布偏移*(distribution shift)的问题。\n",
"接下来的内容将简要描述强化学习问题,这是一类明确考虑与环境交互的问题。\n",
"\n",
"\n",
"### 强化学习\n",
"\n",
"如果你对使用机器学习开发与环境交互并采取行动感兴趣,那么最终可能会专注于*强化学习*reinforcement learning)。\n",
"这可能包括应用到机器人、对话系统,甚至开发视频游戏的人工智能(AI)。\n",
"*深度强化学习*deep reinforcement learning)将深度学习应用于强化学习的问题,是非常热门的研究领域。\n",
"突破性的深度*Q网络*(Q-network)在雅达利游戏中仅使用视觉输入就击败了人类,\n",
"以及 AlphaGo 程序在棋盘游戏围棋中击败了世界冠军,是两个突出强化学习的例子。\n",
"\n",
"在强化学习问题中,智能体(agent)在一系列的时间步骤上与环境交互。\n",
"在每个特定时间点,智能体从环境接收一些*观察*(observation),并且必须选择一个*动作*(action),然后通过某种机制(有时称为执行器)将其传输回环境,最后智能体从环境中获得*奖励*(reward)。\n",
"此后新一轮循环开始,智能体接收后续观察,并选择后续操作,依此类推。\n",
"强化学习的过程在 :numref:`fig_rl-environment` 中进行了说明。\n",
"请注意,强化学习的目标是产生一个好的*策略*(policy)。\n",
"强化学习智能体选择的“动作”受策略控制,即一个从环境观察映射到行动的功能。\n",
"\n",
"![强化学习和环境之间的相互作用](../img/rl-environment.svg)\n",
":label:`fig_rl-environment`\n",
"\n",
"强化学习框架的通用性十分强大。\n",
"例如,我们可以将任何监督学习问题转化为强化学习问题。\n",
"假设我们有一个分类问题,可以创建一个强化学习智能体,每个分类对应一个“动作”。\n",
"然后,我们可以创建一个环境,该环境给予智能体的奖励。\n",
"这个奖励与原始监督学习问题的损失函数是一致的。\n",
"\n",
"当然,强化学习还可以解决许多监督学习无法解决的问题。\n",
"例如,在监督学习中,我们总是希望输入与正确的标签相关联。\n",
"但在强化学习中,我们并不假设环境告诉智能体每个观测的最优动作。\n",
"一般来说,智能体只是得到一些奖励。\n",
"此外,环境甚至可能不会告诉是哪些行为导致了奖励。\n",
"\n",
"以强化学习在国际象棋的应用为例。\n",
"唯一真正的奖励信号出现在游戏结束时:当智能体获胜时,智能体可以得到奖励1;当智能体失败时,智能体将得到奖励-1。\n",
"因此,强化学习者必须处理*学分分配*credit assignment)问题:决定哪些行为是值得奖励的,哪些行为是需要惩罚的。\n",
"就像一个员工升职一样,这次升职很可能反映了前一年的大量的行动。\n",
"要想在未来获得更多的晋升,就需要弄清楚这一过程中哪些行为导致了晋升。\n",
"\n",
"强化学习可能还必须处理部分可观测性问题。\n",
"也就是说,当前的观察结果可能无法阐述有关当前状态的所有信息。\n",
"比方说,一个清洁机器人发现自己被困在一个许多相同的壁橱的房子里。\n",
"推断机器人的精确位置(从而推断其状态),需要在进入壁橱之前考虑它之前的观察结果。\n",
"\n",
"最后,在任何时间点上,强化学习智能体可能知道一个好的策略,但可能有许多更好的策略从未尝试过的。\n",
"强化学习智能体必须不断地做出选择:是应该利用当前最好的策略,还是探索新的策略空间(放弃一些短期回报来换取知识)。\n",
"\n",
"一般的强化学习问题是一个非常普遍的问题。\n",
"智能体的动作会影响后续的观察,而奖励只与所选的动作相对应。\n",
"环境可以是完整观察到的,也可以是部分观察到的,解释所有这些复杂性可能会对研究人员要求太高。\n",
"此外,并不是每个实际问题都表现出所有这些复杂性。\n",
"因此,学者们研究了一些特殊情况下的强化学习问题。\n",
"\n",
"当环境可被完全观察到时,强化学习问题被称为*马尔可夫决策过程*(markov decision process)。\n",
"当状态不依赖于之前的操作时,我们称该问题为*上下文赌博机*(contextual bandit problem)。\n",
"当没有状态,只有一组最初未知回报的可用动作时,这个问题就是经典的*多臂赌博机*(multi-armed bandit problem)。\n",
"\n",
"\n",
"## 起源\n",
"\n",
"为了解决各种各样的机器学习问题,深度学习提供了强大的工具。\n",
"虽然许多深度学习方法都是最近才有重大突破,但使用数据和神经网络编程的核心思想已经研究了几个世纪。\n",
"事实上,人类长期以来就有分析数据和预测未来结果的愿望,而自然科学大部分都植根于此。\n",
"例如,伯努利分布是以[雅各布•伯努利(1654-1705](https://en.wikipedia.org/wiki/Jacob\\uBernoulli)命名的。\n",
"而高斯分布是由[卡尔•弗里德里希•高斯(1777-1855](https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss)发现的,\n",
"他发明了最小均方算法,至今仍用于解决从保险计算到医疗诊断的许多问题。\n",
"这些工具算法催生了自然科学中的一种实验方法——例如,电阻中电流和电压的欧姆定律可以用线性模型完美地描述。\n",
"\n",
"即使在中世纪,数学家对*估计*(estimation)也有敏锐的直觉。\n",
"例如,[雅各布·克贝尔 (1460--1533)](https://www.maa.org/press/periodicals/convergence/mathematical-treasures-jacob-kobels-geometry)的几何学书籍举例说明,通过平均16名成年男性的脚的长度,可以得出一英尺的长度。\n",
"\n",
"![估计一英尺的长度](../img/koebel.jpg)\n",
":width:`500px`\n",
":label:`fig_koebel`\n",
"\n",
":numref:`fig_koebel` 说明了这个估计器是如何工作的。\n",
"16名成年男子被要求脚连脚排成一行。\n",
"然后将它们的总长度除以16,得到现在等于1英尺的估计值。\n",
"这个算法后来被改进以处理畸形的脚——将拥有最短和最长脚的两个人送走,对其余的人取平均值。\n",
"这是最早的修剪均值估计的例子之一。\n",
"\n",
"随着数据的收集和可获得性,统计数据真正实现了腾飞。\n",
"[罗纳德·费舍尔(1890-1962](https://en.wikipedia.org/wiki/Ronald_-Fisher)对统计理论和在遗传学中的应用做出了重大贡献。\n",
"他的许多算法(如线性判别分析)和公式(如费舍尔信息矩阵)至今仍被频繁使用。\n",
"甚至,费舍尔在1936年发布的鸢尾花卉数据集,有时仍然被用来解读机器学习算法。\n",
"他也是优生学的倡导者,这提醒我们:数据科学在道德上存疑的使用,与其在工业和自然科学中的生产性使用一样,有着悠远而持久的历史。\n",
"\n",
"机器学习的第二个影响来自[克劳德·香农(1916--2001)](https://en.wikipedia.org/wiki/Claude_Shannon)的信息论和[艾伦·图灵(1912-1954](https://en.wikipedia.org/wiki/Alan_Turing)的计算理论。\n",
"图灵在他著名的论文《计算机器与智能》 :cite:`Turing.1950` 中提出了“机器能思考吗?”的问题。\n",
"在他所描述的图灵测试中,如果人类评估者很难根据文本互动区分机器和人类的回答,那么机器就可以被认为是“智能的”。\n",
"\n",
"另一个影响可以在神经科学和心理学中找到。\n",
"其中,最古老的算法之一是[唐纳德·赫布 (1904--1985)](https://en.wikipedia.org/wiki/Donald_O._Hebb)开创性的著作《行为的组织》 :cite:`Hebb.Hebb.1949` 。\n",
"他提出神经元通过积极强化学习,是Rosenblatt感知器学习算法的原型,被称为“赫布学习”。\n",
"这个算法也为当今深度学习的许多随机梯度下降算法奠定了基础:强化期望行为和减少不良行为,从而在神经网络中获得良好的参数设置。\n",
"\n",
"*神经网络*neural networks)的得名源于生物灵感。\n",
"一个多世纪以来(追溯到1873年亚历山大·贝恩和1890年詹姆斯·谢林顿的模型),研究人员一直试图组装类似于相互作用的神经元网络的计算电路。\n",
"随着时间的推移,对生物学的解释变得不再肤浅,但这个名字仍然存在。\n",
"其核心是当今大多数网络中都可以找到的几个关键原则:\n",
"\n",
"* 线性和非线性处理单元的交替,通常称为*层*(layers);\n",
"* 使用链式规则(也称为*反向传播*backpropagation))一次性调整网络中的全部参数。\n",
"\n",
"经过最初的快速发展,神经网络的研究从1995年左右开始停滞不前,直到2005年才稍有起色。\n",
"这主要是因为两个原因。\n",
"首先,训练网络(在计算上)非常昂贵。\n",
"在上个世纪末,随机存取存储器(RAM)非常强大,而计算能力却很弱。\n",
"其次,数据集相对较小。\n",
"事实上,费舍尔1932年的鸢尾花卉数据集是测试算法有效性的流行工具,\n",
"而MNIST数据集的60000个手写数字的数据集被认为是巨大的。\n",
"考虑到数据和计算的稀缺性,*核方法*(kernel method)、*决策树*decision tree)和*图模型*graph models)等强大的统计工具(在经验上)证明是更为优越的。\n",
"与神经网络不同的是,这些算法不需要数周的训练,而且有很强的理论依据,可以提供可预测的结果。\n",
"\n",
"\n",
"\n",
"## 深度学习的发展\n",
"\n",
"大约2010年开始,那些在计算上看起来不可行的神经网络算法变得热门起来,实际上是以下两点导致的:\n",
"其一,随着互联网的公司的出现,为数亿在线用户提供服务,大规模数据集变得触手可及;\n",
"另外,廉价又高质量的传感器、廉价的数据存储(克莱德定律)以及廉价计算(摩尔定律)的普及,特别是GPU的普及,使大规模算力唾手可得。\n",
"\n",
"这一点在 :numref:`tab_intro_decade` 中得到了说明。\n",
"\n",
":数据集vs计算机内存和计算能力\n",
"\n",
"| 年代 | 数据规模 | 内存 | 每秒浮点运算 |\n",
"| :--- | :--- | :--- | :--- |\n",
"| 1970 | 100 (鸢尾花卉) | 1 KB | 100 KF (Intel 8080) |\n",
"| 1980 | 1 K (波士顿房价) | 100 KB | 1 MF (Intel 80186) |\n",
"| 1990 | 10 K (光学字符识别) | 10 MB | 10 MF (Intel 80486) |\n",
"| 2000 | 10 M (网页) | 100 MB | 1 GF (Intel Core) |\n",
"| 2010 | 10 G (广告) | 1 GB | 1 TF (Nvidia C2050) |\n",
"| 2020 | 1 T (社交网络) | 100 GB | 1 PF (Nvidia DGX-2) |\n",
":label:`tab_intro_decade`\n",
"\n",
"很明显,随机存取存储器没有跟上数据增长的步伐。\n",
"与此同时,算力的增长速度已经超过了现有数据的增长速度。\n",
"这意味着统计模型需要提高内存效率(这通常是通过添加非线性来实现的),同时由于计算预算的增加,能够花费更多时间来优化这些参数。\n",
"因此,机器学习和统计的关注点从(广义的)线性模型和核方法转移到了深度神经网络。\n",
"这也造就了许多深度学习的中流砥柱,如多层感知机 :cite:`McCulloch.Pitts.1943` 、卷积神经网络 :cite:`LeCun.Bottou.Bengio.ea.1998` 、长短期记忆网络 :cite:`Graves.Schmidhuber.2005` 和Q学习 :cite:`Watkins.Dayan.1992` ,在相对休眠了相当长一段时间之后,在过去十年中被“重新发现”。\n",
"\n",
"最近十年,在统计模型、应用和算法方面的进展就像寒武纪大爆发——历史上物种飞速进化的时期。\n",
"事实上,最先进的技术不仅仅是将可用资源应用于几十年前的算法的结果。\n",
"下面列举了帮助研究人员在过去十年中取得巨大进步的想法(虽然只触及了皮毛)。\n",
"\n",
"\n",
"* 新的容量控制方法,如*dropout* :cite:`Srivastava.Hinton.Krizhevsky.ea.2014`,有助于减轻过拟合的危险。这是通过在整个神经网络中应用噪声注入 :cite:`Bishop.1995` 来实现的,出于训练目的,用随机变量来代替权重。\n",
"* 注意力机制解决了困扰统计学一个多世纪的问题:如何在不增加可学习参数的情况下增加系统的记忆和复杂性。研究人员通过使用只能被视为可学习的指针结构 :cite:`Bahdanau.Cho.Bengio.2014` 找到了一个优雅的解决方案。不需要记住整个文本序列(例如用于固定维度表示中的机器翻译),所有需要存储的都是指向翻译过程的中间状态的指针。这大大提高了长序列的准确性,因为模型在开始生成新序列之前不再需要记住整个序列。\n",
"* 多阶段设计。例如,存储器网络 :cite:`Sukhbaatar.Weston.Fergus.ea.2015` 和神经编程器-解释器 :cite:`Reed.De-Freitas.2015`。它们允许统计建模者描述用于推理的迭代方法。这些工具允许重复修改深度神经网络的内部状态,从而执行推理链中的后续步骤,类似于处理器如何修改用于计算的存储器。\n",
"* 另一个关键的发展是生成对抗网络 :cite:`Goodfellow.Pouget-Abadie.Mirza.ea.2014` 的发明。传统模型中,密度估计和生成模型的统计方法侧重于找到合适的概率分布(通常是近似的)和抽样算法。因此,这些算法在很大程度上受到统计模型固有灵活性的限制。生成式对抗性网络的关键创新是用具有可微参数的任意算法代替采样器。然后对这些数据进行调整,使得鉴别器(实际上是一个双样本测试)不能区分假数据和真实数据。通过使用任意算法生成数据的能力,它为各种技术打开了密度估计的大门。驰骋的斑马 :cite:`Zhu.Park.Isola.ea.2017` 和假名人脸 :cite:`Karras.Aila.Laine.ea.2017` 的例子都证明了这一进展。即使是业余的涂鸦者也可以根据描述场景布局的草图生成照片级真实图像( :cite:`Park.Liu.Wang.ea.2019` )。\n",
"* 在许多情况下,单个GPU不足以处理可用于训练的大量数据。在过去的十年中,构建并行和分布式训练算法的能力有了显著提高。设计可伸缩算法的关键挑战之一是深度学习优化的主力——随机梯度下降,它依赖于相对较小的小批量数据来处理。同时,小批量限制了GPU的效率。因此,在1024个GPU上进行训练,例如每批32个图像的小批量大小相当于总计约32000个图像的小批量。最近的工作,首先是由 :cite:`Li.2017` 完成的,随后是 :cite:`You.Gitman.Ginsburg.2017` 和 :cite:`Jia.Song.He.ea.2018` ,将观察大小提高到64000个,将ResNet-50模型在ImageNet数据集上的训练时间减少到不到7分钟。作为比较——最初的训练时间是按天为单位的。\n",
"* 并行计算的能力也对强化学习的进步做出了相当关键的贡献。这导致了计算机在围棋、雅达里游戏、星际争霸和物理模拟(例如,使用MuJoCo)中实现超人性能的重大进步。有关如何在AlphaGo中实现这一点的说明,请参见如 :cite:`Silver.Huang.Maddison.ea.2016` 。简而言之,如果有大量的(状态、动作、奖励)三元组可用,即只要有可能尝试很多东西来了解它们之间的关系,强化学习就会发挥最好的作用。仿真提供了这样一条途径。\n",
"* 深度学习框架在传播思想方面发挥了至关重要的作用。允许轻松建模的第一代框架包括[Caffe](https://github.com/BVLC/caffe)、[Torch](https://github.com/torch)和[Theano](https://github.com/Theano/Theano)。许多开创性的论文都是用这些工具写的。到目前为止,它们已经被[TensorFlow](https://github.com/tensorflow/tensorflow)(通常通过其高级API [Keras](https://github.com/keras-team/keras)使用)、[CNTK](https://github.com/Microsoft/CNTK)、[Caffe 2](https://github.com/caffe2/caffe2)和[Apache MXNet](https://github.com/apache/incubator-mxnet)所取代。第三代工具,即用于深度学习的命令式工具,可以说是由[Chainer](https://github.com/chainer/chainer)率先推出的,它使用类似于Python NumPy的语法来描述模型。这个想法被[PyTorch](https://github.com/pytorch/pytorch)、MXNet的[Gluon API](https://github.com/apache/incubator-mxnet)和[Jax](https://github.com/google/jax)都采纳了。\n",
"\n",
"“系统研究人员构建更好的工具”和“统计建模人员构建更好的神经网络”之间的分工大大简化了工作。\n",
"例如,在2014年,对卡内基梅隆大学机器学习博士生来说,训练线性回归模型曾经是一个不容易的作业问题。\n",
"而现在,这项任务只需不到10行代码就能完成,这让每个程序员轻易掌握了它。\n",
"\n",
"\n",
"## 深度学习的成功案例\n",
"\n",
"人工智能在交付结果方面有着悠久的历史,它能带来用其他方法很难实现的结果。例如,使用光学字符识别的邮件分拣系统从20世纪90年代开始部署,毕竟,这是著名的手写数字MNIST数据集的来源。这同样适用于阅读银行存款支票和对申请者的信用进行评分。系统会自动检查金融交易是否存在欺诈。这成为许多电子商务支付系统的支柱,如PayPal、Stripe、支付宝、微信、苹果、Visa和万事达卡。国际象棋的计算机程序已经竞争了几十年。机器学习在互联网上提供搜索、推荐、个性化和排名。换句话说,机器学习是无处不在的,尽管它经常隐藏在视线之外。\n",
"\n",
"直到最近,人工智能才成为人们关注的焦点,主要是因为解决了以前被认为难以解决的问题,这些问题与消费者直接相关。许多这样的进步都归功于深度学习。\n",
"\n",
"* 智能助理,如苹果的Siri、亚马逊的Alexa和谷歌助手,都能够相当准确地回答口头问题。这包括一些琐碎的工作,比如打开电灯开关(对残疾人来说是个福音)甚至预约理发师和提供电话支持对话。这可能是人工智能正在影响我们生活的最明显的迹象。\n",
"* 数字助理的一个关键要素是准确识别语音的能力。逐渐地,在某些应用中,此类系统的准确性已经提高到与人类同等水平的程度 :cite:`Xiong.Wu.Alleva.ea.2018`。\n",
"* 物体识别同样也取得了长足的进步。估计图片中的物体在2010年是一项相当具有挑战性的任务。在ImageNet基准上,来自NEC实验室和伊利诺伊大学香槟分校的研究人员获得了28%的Top-5错误率 :cite:`Lin.Lv.Zhu.ea.2010` 。到2017年,这一错误率降低到2.25% :cite:`Hu.Shen.Sun.2018` 。同样,在鉴别鸟类或诊断皮肤癌方面也取得了惊人的成果。\n",
"* 游戏曾经是人类智慧的堡垒。从TD-Gammon开始,一个使用时差强化学习的五子棋游戏程序,算法和计算的进步导致了算法被广泛应用。与五子棋不同的是,国际象棋有一个复杂得多的状态空间和一组动作。深蓝公司利用大规模并行性、专用硬件和高效搜索游戏树 :cite:`Campbell.Hoane-Jr.Hsu.2002` 击败了加里·卡斯帕罗夫(Garry Kasparov)。围棋由于其巨大的状态空间,难度更大。AlphaGo在2015年达到了相当于人类的棋力,使用和蒙特卡洛树抽样 :cite:`Silver.Huang.Maddison.ea.2016` 相结合的深度学习。扑克中的挑战是状态空间很大,而且没有完全观察到(我们不知道对手的牌)。在扑克游戏中,库图斯使用有效的结构化策略超过了人类的表现 :cite:`Brown.Sandholm.2017` 。这说明了游戏取得了令人瞩目的进步以及先进的算法在其中发挥了关键作用的事实。\n",
"* 人工智能进步的另一个迹象是自动驾驶汽车和卡车的出现。虽然完全自主还没有完全触手可及,但在这个方向上已经取得了很好的进展,特斯拉(Tesla)、英伟达(NVIDIA)和Waymo等公司的产品至少实现了部分自主。让完全自主如此具有挑战性的是,正确的驾驶需要感知、推理和将规则纳入系统的能力。目前,深度学习主要应用于这些问题的计算机视觉方面。其余部分则由工程师进行大量调整。\n",
"\n",
"同样,上面的列表仅仅触及了机器学习对实际应用的影响之处的皮毛。\n",
"例如,机器人学、物流、计算生物学、粒子物理学和天文学最近取得的一些突破性进展至少部分归功于机器学习。\n",
"因此,机器学习正在成为工程师和科学家必备的工具。\n",
"\n",
"关于人工智能的非技术性文章中,经常提到人工智能奇点的问题:机器学习系统会变得有知觉,并独立于主人来决定那些直接影响人类生计的事情。\n",
"在某种程度上,人工智能已经直接影响到人类的生计:信誉度的自动评估,车辆的自动驾驶,保释决定的自动准予等等。\n",
"甚至,我们可以让Alexa打开咖啡机。\n",
"\n",
"幸运的是,我们离一个能够控制人类创造者的有知觉的人工智能系统还很远。\n",
"首先,人工智能系统是以一种特定的、面向目标的方式设计、训练和部署的。\n",
"虽然他们的行为可能会给人一种通用智能的错觉,但设计的基础是规则、启发式和统计模型的结合。\n",
"其次,目前还不存在能够自我改进、自我推理、能够在试图解决一般任务的同时,修改、扩展和改进自己的架构的“人工通用智能”工具。\n",
"\n",
"一个更紧迫的问题是人工智能在日常生活中的应用。\n",
"卡车司机和店员完成的许多琐碎的工作很可能也将是自动化的。\n",
"农业机器人可能会降低有机农业的成本,它们也将使收割作业自动化。\n",
"工业革命的这一阶段可能对社会的大部分地区产生深远的影响,因为卡车司机和店员是许多国家最常见的工作之一。\n",
"此外,如果不加注意地应用统计模型,可能会导致种族、性别或年龄偏见,如果自动驱动相应的决策,则会引起对程序公平性的合理关注。\n",
"重要的是要确保小心使用这些算法。\n",
"就我们今天所知,这比恶意超级智能毁灭人类的风险更令人担忧。\n",
"\n",
"## 特点\n",
"\n",
"到目前为止,本节已经广泛地讨论了机器学习,它既是人工智能的一个分支,也是人工智能的一种方法。\n",
"虽然深度学习是机器学习的一个子集,但令人眼花缭乱的算法和应用程序集让人很难评估深度学习的具体成分是什么。\n",
"这就像试图确定披萨所需的配料一样困难,因为几乎每种成分都是可以替代的。\n",
"\n",
"如前所述,机器学习可以使用数据来学习输入和输出之间的转换,例如在语音识别中将音频转换为文本。\n",
"在这样做时,通常需要以适合算法的方式表示数据,以便将这种表示转换为输出。\n",
"深度学习是“深度”的,模型学习了许多“层”的转换,每一层提供一个层次的表示。\n",
"例如,靠近输入的层可以表示数据的低级细节,而接近分类输出的层可以表示用于区分的更抽象的概念。\n",
"由于*表示学习*representation learning)目的是寻找表示本身,因此深度学习可以称为“多级表示学习”。\n",
"\n",
"本节到目前为止讨论的问题,例如从原始音频信号中学习,图像的原始像素值,或者任意长度的句子与外语中的对应句子之间的映射,都是深度学习优于传统机器学习方法的问题。\n",
"事实证明,这些多层模型能够以以前的工具所不能的方式处理低级的感知数据。\n",
"毋庸置疑,深度学习方法中最显著的共同点是使用端到端训练。\n",
"也就是说,与其基于单独调整的组件组装系统,不如构建系统,然后联合调整它们的性能。\n",
"例如,在计算机视觉中,科学家们习惯于将特征工程的过程与建立机器学习模型的过程分开。\n",
"Canny边缘检测器 :cite:`Canny.1987` 和SIFT特征提取器 :cite:`Lowe.2004` 作为将图像映射到特征向量的算法,在过去的十年里占据了至高无上的地位。\n",
"在过去的日子里,将机器学习应用于这些问题的关键部分是提出人工设计的特征工程方法,将数据转换为某种适合于浅层模型的形式。\n",
"然而,与一个算法自动执行的数百万个选择相比,人类通过特征工程所能完成的事情很少。\n",
"当深度学习开始时,这些特征抽取器被自动调整的滤波器所取代,产生了更高的精确度。\n",
"\n",
"因此,深度学习的一个关键优势是它不仅取代了传统学习管道末端的浅层模型,而且还取代了劳动密集型的特征工程过程。\n",
"此外,通过取代大部分特定领域的预处理,深度学习消除了以前分隔计算机视觉、语音识别、自然语言处理、医学信息学和其他应用领域的许多界限,为解决各种问题提供了一套统一的工具。\n",
"\n",
"除了端到端的训练,人们正在经历从参数统计描述到完全非参数模型的转变。\n",
"当数据稀缺时,人们需要依靠简化对现实的假设来获得有用的模型。\n",
"当数据丰富时,可以用更准确地拟合实际情况的非参数模型来代替。\n",
"在某种程度上,这反映了物理学在上个世纪中叶随着计算机的出现所经历的进步。\n",
"现在人们可以借助于相关偏微分方程的数值模拟,而不是用手来求解电子行为的参数近似。这导致了更精确的模型,尽管常常以牺牲可解释性为代价。\n",
"\n",
"与以前工作的另一个不同之处是接受次优解,处理非凸非线性优化问题,并且愿意在证明之前尝试。\n",
"这种在处理统计问题上新发现的经验主义,加上人才的迅速涌入,导致了实用算法的快速进步。\n",
"尽管在许多情况下,这是以修改和重新发明存在了数十年的工具为代价的。\n",
"\n",
"最后,深度学习社区引以为豪的是,他们跨越学术界和企业界共享工具,发布了许多优秀的算法库、统计模型和经过训练的开源神经网络。\n",
"正是本着这种精神,本书免费分发和使用。我们努力降低每个人了解深度学习的门槛,希望读者能从中受益。\n",
"\n",
"## 小结\n",
"\n",
"* 机器学习研究计算机系统如何利用经验(通常是数据)来提高特定任务的性能。它结合了统计学、数据挖掘和优化的思想。通常,它是被用作实现人工智能解决方案的一种手段。\n",
"* 表示学习作为机器学习的一类,其研究的重点是如何自动找到合适的数据表示方式。深度学习是通过学习多层次的转换来进行的多层次的表示学习。\n",
"* 深度学习不仅取代了传统机器学习的浅层模型,而且取代了劳动密集型的特征工程。\n",
"* 最近在深度学习方面取得的许多进展,大都是由廉价传感器和互联网规模应用所产生的大量数据,以及(通过GPU)算力的突破来触发的。\n",
"* 整个系统优化是获得高性能的关键环节。有效的深度学习框架的开源使得这一点的设计和实现变得非常容易。\n",
"\n",
"## 练习\n",
"\n",
"1. 你当前正在编写的代码的哪些部分可以“学习”,即通过学习和自动确定代码中所做的设计选择来改进?你的代码是否包含启发式设计选择?\n",
"1. 你遇到的哪些问题有许多解决它们的样本,但没有具体的自动化方法?这些可能是使用深度学习的主要候选者。\n",
"1. 如果把人工智能的发展看作一场新的工业革命,那么算法和数据之间的关系是什么?它类似于蒸汽机和煤吗?根本区别是什么?\n",
"1. 你还可以在哪里应用端到端的训练方法,比如 :numref:`fig_ml_loop` 、物理、工程和计量经济学?\n",
"\n",
"[Discussions](https://discuss.d2l.ai/t/1744)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
File diff suppressed because one or more lines are too long
@@ -0,0 +1,40 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e65d2ddd",
"metadata": {
"origin_pos": 0
},
"source": [
"# 线性神经网络\n",
":label:`chap_linear`\n",
"\n",
"在介绍深度神经网络之前,我们需要了解神经网络训练的基础知识。\n",
"本章我们将介绍神经网络的整个训练过程,\n",
"包括:定义简单的神经网络架构、数据处理、指定损失函数和如何训练模型。\n",
"为了更容易学习,我们将从经典算法————*线性*神经网络开始,介绍神经网络的基础知识。\n",
"经典统计学习技术中的线性回归和softmax回归可以视为线性神经网络,\n",
"这些知识将为本书其他部分中更复杂的技术奠定基础。\n",
"\n",
":begin_tab:toc\n",
" - [linear-regression](linear-regression.ipynb)\n",
" - [linear-regression-scratch](linear-regression-scratch.ipynb)\n",
" - [linear-regression-concise](linear-regression-concise.ipynb)\n",
" - [softmax-regression](softmax-regression.ipynb)\n",
" - [image-classification-dataset](image-classification-dataset.ipynb)\n",
" - [softmax-regression-scratch](softmax-regression-scratch.ipynb)\n",
" - [softmax-regression-concise](softmax-regression-concise.ipynb)\n",
":end_tab:\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,609 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "3e211967",
"metadata": {
"origin_pos": 0
},
"source": [
"# 线性回归的简洁实现\n",
":label:`sec_linear_concise`\n",
"\n",
"在过去的几年里,出于对深度学习强烈的兴趣,\n",
"许多公司、学者和业余爱好者开发了各种成熟的开源框架。\n",
"这些框架可以自动化基于梯度的学习算法中重复性的工作。\n",
"在 :numref:`sec_linear_scratch`中,我们只运用了:\n",
"(1)通过张量来进行数据存储和线性代数;\n",
"(2)通过自动微分来计算梯度。\n",
"实际上,由于数据迭代器、损失函数、优化器和神经网络层很常用,\n",
"现代深度学习库也为我们实现了这些组件。\n",
"\n",
"本节将介绍如何(**通过使用深度学习框架来简洁地实现**)\n",
" :numref:`sec_linear_scratch`中的(**线性回归模型**)。\n",
"\n",
"## 生成数据集\n",
"\n",
"与 :numref:`sec_linear_scratch`中类似,我们首先[**生成数据集**]。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "5c88734d",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:52.522009Z",
"iopub.status.busy": "2023-08-18T07:01:52.521295Z",
"iopub.status.idle": "2023-08-18T07:01:54.610713Z",
"shell.execute_reply": "2023-08-18T07:01:54.609677Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import numpy as np\n",
"import torch\n",
"from torch.utils import data\n",
"from d2l import torch as d2l"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "c26b741f",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:54.616404Z",
"iopub.status.busy": "2023-08-18T07:01:54.615685Z",
"iopub.status.idle": "2023-08-18T07:01:54.643472Z",
"shell.execute_reply": "2023-08-18T07:01:54.642512Z"
},
"origin_pos": 5,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"true_w = torch.tensor([2, -3.4])\n",
"true_b = 4.2\n",
"features, labels = d2l.synthetic_data(true_w, true_b, 1000)"
]
},
{
"cell_type": "markdown",
"id": "e6fd8db7",
"metadata": {
"origin_pos": 6
},
"source": [
"## 读取数据集\n",
"\n",
"我们可以[**调用框架中现有的API来读取数据**]。\n",
"我们将`features`和`labels`作为API的参数传递,并通过数据迭代器指定`batch_size`。\n",
"此外,布尔值`is_train`表示是否希望数据迭代器对象在每个迭代周期内打乱数据。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "955f5cc0",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:54.648232Z",
"iopub.status.busy": "2023-08-18T07:01:54.647744Z",
"iopub.status.idle": "2023-08-18T07:01:54.653335Z",
"shell.execute_reply": "2023-08-18T07:01:54.652317Z"
},
"origin_pos": 8,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"def load_array(data_arrays, batch_size, is_train=True): #@save\n",
" \"\"\"构造一个PyTorch数据迭代器\"\"\"\n",
" dataset = data.TensorDataset(*data_arrays)\n",
" return data.DataLoader(dataset, batch_size, shuffle=is_train)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "c041eafa",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:54.657592Z",
"iopub.status.busy": "2023-08-18T07:01:54.656999Z",
"iopub.status.idle": "2023-08-18T07:01:54.661787Z",
"shell.execute_reply": "2023-08-18T07:01:54.660785Z"
},
"origin_pos": 11,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"batch_size = 10\n",
"data_iter = load_array((features, labels), batch_size)"
]
},
{
"cell_type": "markdown",
"id": "503e6815",
"metadata": {
"origin_pos": 12
},
"source": [
"使用`data_iter`的方式与我们在 :numref:`sec_linear_scratch`中使用`data_iter`函数的方式相同。为了验证是否正常工作,让我们读取并打印第一个小批量样本。\n",
"与 :numref:`sec_linear_scratch`不同,这里我们使用`iter`构造Python迭代器,并使用`next`从迭代器中获取第一项。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "7c6919b8",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:54.665574Z",
"iopub.status.busy": "2023-08-18T07:01:54.664999Z",
"iopub.status.idle": "2023-08-18T07:01:54.673523Z",
"shell.execute_reply": "2023-08-18T07:01:54.672688Z"
},
"origin_pos": 13,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"[tensor([[-1.3116, -0.3062],\n",
" [-1.5653, 0.4830],\n",
" [-0.8893, -0.9466],\n",
" [-1.2417, 1.6891],\n",
" [-0.7148, 0.1376],\n",
" [-0.2162, -0.6122],\n",
" [ 2.4048, -0.3211],\n",
" [-0.1516, 0.4997],\n",
" [ 1.5298, -0.2291],\n",
" [ 1.3895, 1.2602]]),\n",
" tensor([[ 2.6073],\n",
" [-0.5787],\n",
" [ 5.6339],\n",
" [-4.0211],\n",
" [ 2.3117],\n",
" [ 5.8492],\n",
" [10.0926],\n",
" [ 2.1932],\n",
" [ 8.0441],\n",
" [ 2.6943]])]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"next(iter(data_iter))"
]
},
{
"cell_type": "markdown",
"id": "4f57af75",
"metadata": {
"origin_pos": 14
},
"source": [
"## 定义模型\n",
"\n",
"当我们在 :numref:`sec_linear_scratch`中实现线性回归时,\n",
"我们明确定义了模型参数变量,并编写了计算的代码,这样通过基本的线性代数运算得到输出。\n",
"但是,如果模型变得更加复杂,且当我们几乎每天都需要实现模型时,自然会想简化这个过程。\n",
"这种情况类似于为自己的博客从零开始编写网页。\n",
"做一两次是有益的,但如果每个新博客就需要工程师花一个月的时间重新开始编写网页,那并不高效。\n",
"\n",
"对于标准深度学习模型,我们可以[**使用框架的预定义好的层**]。这使我们只需关注使用哪些层来构造模型,而不必关注层的实现细节。\n",
"我们首先定义一个模型变量`net`,它是一个`Sequential`类的实例。\n",
"`Sequential`类将多个层串联在一起。\n",
"当给定输入数据时,`Sequential`实例将数据传入到第一层,\n",
"然后将第一层的输出作为第二层的输入,以此类推。\n",
"在下面的例子中,我们的模型只包含一个层,因此实际上不需要`Sequential`。\n",
"但是由于以后几乎所有的模型都是多层的,在这里使用`Sequential`会让你熟悉“标准的流水线”。\n",
"\n",
"回顾 :numref:`fig_single_neuron`中的单层网络架构,\n",
"这一单层被称为*全连接层*fully-connected layer),\n",
"因为它的每一个输入都通过矩阵-向量乘法得到它的每个输出。\n"
]
},
{
"cell_type": "markdown",
"id": "2b7cb683",
"metadata": {
"origin_pos": 16,
"tab": [
"pytorch"
]
},
"source": [
"在PyTorch中,全连接层在`Linear`类中定义。\n",
"值得注意的是,我们将两个参数传递到`nn.Linear`中。\n",
"第一个指定输入特征形状,即2,第二个指定输出特征形状,输出特征形状为单个标量,因此为1。\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "85c54a1a",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:54.677177Z",
"iopub.status.busy": "2023-08-18T07:01:54.676580Z",
"iopub.status.idle": "2023-08-18T07:01:54.680914Z",
"shell.execute_reply": "2023-08-18T07:01:54.680130Z"
},
"origin_pos": 20,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"# nn是神经网络的缩写\n",
"from torch import nn\n",
"\n",
"net = nn.Sequential(nn.Linear(2, 1))"
]
},
{
"cell_type": "markdown",
"id": "fc18b2c1",
"metadata": {
"origin_pos": 23
},
"source": [
"## (**初始化模型参数**)\n",
"\n",
"在使用`net`之前,我们需要初始化模型参数。\n",
"如在线性回归模型中的权重和偏置。\n",
"深度学习框架通常有预定义的方法来初始化参数。\n",
"在这里,我们指定每个权重参数应该从均值为0、标准差为0.01的正态分布中随机采样,\n",
"偏置参数将初始化为零。\n"
]
},
{
"cell_type": "markdown",
"id": "f7452e3b",
"metadata": {
"origin_pos": 25,
"tab": [
"pytorch"
]
},
"source": [
"正如我们在构造`nn.Linear`时指定输入和输出尺寸一样,\n",
"现在我们能直接访问参数以设定它们的初始值。\n",
"我们通过`net[0]`选择网络中的第一个图层,\n",
"然后使用`weight.data`和`bias.data`方法访问参数。\n",
"我们还可以使用替换方法`normal_`和`fill_`来重写参数值。\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "31716c55",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:54.684561Z",
"iopub.status.busy": "2023-08-18T07:01:54.684036Z",
"iopub.status.idle": "2023-08-18T07:01:54.690673Z",
"shell.execute_reply": "2023-08-18T07:01:54.689754Z"
},
"origin_pos": 29,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([0.])"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"net[0].weight.data.normal_(0, 0.01)\n",
"net[0].bias.data.fill_(0)"
]
},
{
"cell_type": "markdown",
"id": "94568f78",
"metadata": {
"origin_pos": 33,
"tab": [
"pytorch"
]
},
"source": [
"\n"
]
},
{
"cell_type": "markdown",
"id": "e9592f9a",
"metadata": {
"origin_pos": 35
},
"source": [
"## 定义损失函数\n"
]
},
{
"cell_type": "markdown",
"id": "9a431ee3",
"metadata": {
"origin_pos": 37,
"tab": [
"pytorch"
]
},
"source": [
"[**计算均方误差使用的是`MSELoss`类,也称为平方$L_2$范数**]。\n",
"默认情况下,它返回所有样本损失的平均值。\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "19a417ac",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:54.695575Z",
"iopub.status.busy": "2023-08-18T07:01:54.694922Z",
"iopub.status.idle": "2023-08-18T07:01:54.699373Z",
"shell.execute_reply": "2023-08-18T07:01:54.698348Z"
},
"origin_pos": 41,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"loss = nn.MSELoss()"
]
},
{
"cell_type": "markdown",
"id": "30dbe343",
"metadata": {
"origin_pos": 44
},
"source": [
"## 定义优化算法\n"
]
},
{
"cell_type": "markdown",
"id": "2663da90",
"metadata": {
"origin_pos": 46,
"tab": [
"pytorch"
]
},
"source": [
"小批量随机梯度下降算法是一种优化神经网络的标准工具,\n",
"PyTorch在`optim`模块中实现了该算法的许多变种。\n",
"当我们(**实例化一个`SGD`实例**)时,我们要指定优化的参数\n",
"(可通过`net.parameters()`从我们的模型中获得)以及优化算法所需的超参数字典。\n",
"小批量随机梯度下降只需要设置`lr`值,这里设置为0.03。\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "1ae0989f",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:54.703905Z",
"iopub.status.busy": "2023-08-18T07:01:54.703368Z",
"iopub.status.idle": "2023-08-18T07:01:54.708081Z",
"shell.execute_reply": "2023-08-18T07:01:54.706987Z"
},
"origin_pos": 50,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"trainer = torch.optim.SGD(net.parameters(), lr=0.03)"
]
},
{
"cell_type": "markdown",
"id": "004056f1",
"metadata": {
"origin_pos": 53
},
"source": [
"## 训练\n",
"\n",
"通过深度学习框架的高级API来实现我们的模型只需要相对较少的代码。\n",
"我们不必单独分配参数、不必定义我们的损失函数,也不必手动实现小批量随机梯度下降。\n",
"当我们需要更复杂的模型时,高级API的优势将大大增加。\n",
"当我们有了所有的基本组件,[**训练过程代码与我们从零开始实现时所做的非常相似**]。\n",
"\n",
"回顾一下:在每个迭代周期里,我们将完整遍历一次数据集(`train_data`),\n",
"不停地从中获取一个小批量的输入和相应的标签。\n",
"对于每一个小批量,我们会进行以下步骤:\n",
"\n",
"* 通过调用`net(X)`生成预测并计算损失`l`(前向传播)。\n",
"* 通过进行反向传播来计算梯度。\n",
"* 通过调用优化器来更新模型参数。\n",
"\n",
"为了更好的衡量训练效果,我们计算每个迭代周期后的损失,并打印它来监控训练过程。\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "1270d706",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:54.712705Z",
"iopub.status.busy": "2023-08-18T07:01:54.712113Z",
"iopub.status.idle": "2023-08-18T07:01:54.922720Z",
"shell.execute_reply": "2023-08-18T07:01:54.921580Z"
},
"origin_pos": 55,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch 1, loss 0.000248\n",
"epoch 2, loss 0.000103\n",
"epoch 3, loss 0.000103\n"
]
}
],
"source": [
"num_epochs = 3\n",
"for epoch in range(num_epochs):\n",
" for X, y in data_iter:\n",
" l = loss(net(X) ,y)\n",
" trainer.zero_grad()\n",
" l.backward()\n",
" trainer.step()\n",
" l = loss(net(features), labels)\n",
" print(f'epoch {epoch + 1}, loss {l:f}')"
]
},
{
"cell_type": "markdown",
"id": "2f52dea0",
"metadata": {
"origin_pos": 58
},
"source": [
"下面我们[**比较生成数据集的真实参数和通过有限数据训练获得的模型参数**]。\n",
"要访问参数,我们首先从`net`访问所需的层,然后读取该层的权重和偏置。\n",
"正如在从零开始实现中一样,我们估计得到的参数与生成数据的真实参数非常接近。\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "aa7cef5a",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:54.927464Z",
"iopub.status.busy": "2023-08-18T07:01:54.927072Z",
"iopub.status.idle": "2023-08-18T07:01:54.935672Z",
"shell.execute_reply": "2023-08-18T07:01:54.934585Z"
},
"origin_pos": 60,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"w的估计误差: tensor([-0.0010, -0.0003])\n",
"b的估计误差: tensor([-0.0003])\n"
]
}
],
"source": [
"w = net[0].weight.data\n",
"print('w的估计误差:', true_w - w.reshape(true_w.shape))\n",
"b = net[0].bias.data\n",
"print('b的估计误差:', true_b - b)"
]
},
{
"cell_type": "markdown",
"id": "f62d52d4",
"metadata": {
"origin_pos": 63
},
"source": [
"## 小结\n"
]
},
{
"cell_type": "markdown",
"id": "b6db4aa3",
"metadata": {
"origin_pos": 65,
"tab": [
"pytorch"
]
},
"source": [
"* 我们可以使用PyTorch的高级API更简洁地实现模型。\n",
"* 在PyTorch中,`data`模块提供了数据处理工具,`nn`模块定义了大量的神经网络层和常见损失函数。\n",
"* 我们可以通过`_`结尾的方法将参数替换,从而初始化参数。\n"
]
},
{
"cell_type": "markdown",
"id": "eb6af2c7",
"metadata": {
"origin_pos": 67
},
"source": [
"## 练习\n",
"\n",
"1. 如果将小批量的总损失替换为小批量损失的平均值,需要如何更改学习率?\n",
"1. 查看深度学习框架文档,它们提供了哪些损失函数和初始化方法?用Huber损失代替原损失,即\n",
" $$l(y,y') = \\begin{cases}|y-y'| -\\frac{\\sigma}{2} & \\text{ if } |y-y'| > \\sigma \\\\ \\frac{1}{2 \\sigma} (y-y')^2 & \\text{ 其它情况}\\end{cases}$$\n",
"1. 如何访问线性回归的梯度?\n"
]
},
{
"cell_type": "markdown",
"id": "4e43317d",
"metadata": {
"origin_pos": 69,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/1781)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
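针对上面练习中提到的Huber损失和梯度访问,下面给出一个最小示意。为保持自包含,这里不使用`d2l.synthetic_data`,而是用`torch`手工生成与正文同分布的合成数据;并用PyTorch自带的`nn.HuberLoss`替换`MSELoss`(其`delta`大致对应练习公式中的$\sigma$,但二次段的系数与练习公式略有不同)。这只是一个演示假设下的草图,并非正文的标准实现:

```python
import torch
from torch import nn
from torch.utils import data

# 与正文相同形式的合成数据:y = Xw + b + 噪声
true_w = torch.tensor([2.0, -3.4])
true_b = 4.2
features = torch.normal(0, 1, (1000, 2))
labels = (features @ true_w + true_b + torch.normal(0, 0.01, (1000,))).reshape(-1, 1)

dataset = data.TensorDataset(features, labels)
data_iter = data.DataLoader(dataset, batch_size=10, shuffle=True)

net = nn.Sequential(nn.Linear(2, 1))
loss = nn.HuberLoss(delta=1.0)  # 用Huber损失替换MSELoss(演示假设)
trainer = torch.optim.SGD(net.parameters(), lr=0.03)

for epoch in range(5):
    for X, y in data_iter:
        l = loss(net(X), y)
        trainer.zero_grad()
        l.backward()
        trainer.step()

# 练习3:训练后可直接读取线性层参数的梯度
print(net[0].weight.grad.shape)
```

由于Huber损失在误差较大时梯度被截断,收敛比MSE慢一些,这里多训练了两个迭代周期;最终学到的参数同样应接近`true_w`和`true_b`。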
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because one or more lines are too long
@@ -0,0 +1,345 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d794c62a",
"metadata": {
"origin_pos": 0
},
"source": [
"# softmax回归\n",
":label:`sec_softmax`\n",
"\n",
"在 :numref:`sec_linear_regression`中我们介绍了线性回归。\n",
"随后,在 :numref:`sec_linear_scratch`中我们从头实现线性回归。\n",
"然后,在 :numref:`sec_linear_concise`中我们使用深度学习框架的高级API简洁实现线性回归。\n",
"\n",
"回归可以用于预测*多少*的问题。\n",
"比如预测房屋被售出价格,或者棒球队可能获得的胜场数,又或者患者住院的天数。\n",
"\n",
"事实上,我们也对*分类*问题感兴趣:不是问“多少”,而是问“哪一个”:\n",
"\n",
"* 某个电子邮件是否属于垃圾邮件文件夹?\n",
"* 某个用户可能*注册*或*不注册*订阅服务?\n",
"* 某个图像描绘的是驴、狗、猫、还是鸡?\n",
"* 某人接下来最有可能看哪部电影?\n",
"\n",
"通常,机器学习实践者用*分类*这个词来描述两个有微妙差别的问题:\n",
"1. 我们只对样本的“硬性”类别感兴趣,即属于哪个类别;\n",
"2. 我们希望得到“软性”类别,即得到属于每个类别的概率。\n",
"这两者的界限往往很模糊。其中的一个原因是:即使我们只关心硬类别,我们仍然使用软类别的模型。\n",
"\n",
"## 分类问题\n",
":label:`subsec_classification-problem`\n",
"\n",
"我们从一个图像分类问题开始。\n",
"假设每次输入是一个$2\\times2$的灰度图像。\n",
"我们可以用一个标量表示每个像素值,每个图像对应四个特征$x_1, x_2, x_3, x_4$。\n",
"此外,假设每个图像属于类别“猫”“鸡”和“狗”中的一个。\n",
"\n",
"接下来,我们要选择如何表示标签。\n",
"我们有两个明显的选择:最直接的想法是选择$y \\in \\{1, 2, 3\\}$\n",
"其中整数分别代表$\\{\\text{狗}, \\text{猫}, \\text{鸡}\\}$。\n",
"这是在计算机上存储此类信息的有效方法。\n",
"如果类别间有一些自然顺序,\n",
"比如说我们试图预测$\\{\\text{婴儿}, \\text{儿童}, \\text{青少年}, \\text{青年人}, \\text{中年人}, \\text{老年人}\\}$\n",
"那么将这个问题转变为回归问题,并且保留这种格式是有意义的。\n",
"\n",
"但是一般的分类问题并不与类别之间的自然顺序有关。\n",
"幸运的是,统计学家很早以前就发明了一种表示分类数据的简单方法:*独热编码*(one-hot encoding)。\n",
"独热编码是一个向量,它的分量和类别一样多。\n",
"类别对应的分量设置为1,其他所有分量设置为0。\n",
"在我们的例子中,标签$y$将是一个三维向量,\n",
"其中$(1, 0, 0)$对应于“猫”、$(0, 1, 0)$对应于“鸡”、$(0, 0, 1)$对应于“狗”:\n",
"\n",
"$$y \\in \\{(1, 0, 0), (0, 1, 0), (0, 0, 1)\\}.$$\n",
"\n",
"## 网络架构\n",
"\n",
"为了估计所有可能类别的条件概率,我们需要一个有多个输出的模型,每个类别对应一个输出。\n",
"为了解决线性模型的分类问题,我们需要和输出一样多的*仿射函数*(affine function)。\n",
"每个输出对应于它自己的仿射函数。\n",
"在我们的例子中,由于我们有4个特征和3个可能的输出类别,\n",
"我们将需要12个标量来表示权重(带下标的$w$),\n",
"3个标量来表示偏置(带下标的$b$)。\n",
"下面我们为每个输入计算三个*未规范化的预测*(logit):$o_1$、$o_2$和$o_3$。\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"o_1 &= x_1 w_{11} + x_2 w_{12} + x_3 w_{13} + x_4 w_{14} + b_1,\\\\\n",
"o_2 &= x_1 w_{21} + x_2 w_{22} + x_3 w_{23} + x_4 w_{24} + b_2,\\\\\n",
"o_3 &= x_1 w_{31} + x_2 w_{32} + x_3 w_{33} + x_4 w_{34} + b_3.\n",
"\\end{aligned}\n",
"$$\n",
"\n",
"我们可以用神经网络图 :numref:`fig_softmaxreg`来描述这个计算过程。\n",
"与线性回归一样,softmax回归也是一个单层神经网络。\n",
"由于计算每个输出$o_1$、$o_2$和$o_3$取决于\n",
"所有输入$x_1$、$x_2$、$x_3$和$x_4$\n",
"所以softmax回归的输出层也是全连接层。\n",
"\n",
"![softmax回归是一种单层神经网络](../img/softmaxreg.svg)\n",
":label:`fig_softmaxreg`\n",
"\n",
"为了更简洁地表达模型,我们仍然使用线性代数符号。\n",
"通过向量形式表达为$\\mathbf{o} = \\mathbf{W} \\mathbf{x} + \\mathbf{b}$\n",
"这是一种更适合数学和编写代码的形式。\n",
"由此,我们已经将所有权重放到一个$3 \\times 4$矩阵中。\n",
"对于给定数据样本的特征$\\mathbf{x}$\n",
"我们的输出是由权重与输入特征进行矩阵-向量乘法再加上偏置$\\mathbf{b}$得到的。\n",
"\n",
"## 全连接层的参数开销\n",
":label:`subsec_parameterization-cost-fc-layers`\n",
"\n",
"正如我们将在后续章节中看到的,在深度学习中,全连接层无处不在。\n",
"然而,顾名思义,全连接层是“完全”连接的,可能有很多可学习的参数。\n",
"具体来说,对于任何具有$d$个输入和$q$个输出的全连接层,\n",
"参数开销为$\\mathcal{O}(dq)$,这个数字在实践中可能高得令人望而却步。\n",
"幸运的是,将$d$个输入转换为$q$个输出的成本可以减少到$\\mathcal{O}(\\frac{dq}{n})$\n",
"其中超参数$n$可以由我们灵活指定,以在实际应用中平衡参数节约和模型有效性\n",
" :cite:`Zhang.Tay.Zhang.ea.2021`。\n",
"\n",
"## softmax运算\n",
":label:`subsec_softmax_operation`\n",
"\n",
"现在我们将优化参数以最大化观测数据的概率。\n",
"为了得到预测结果,我们将设置一个阈值,如选择具有最大概率的标签。\n",
"\n",
"我们希望模型的输出$\\hat{y}_j$可以视为属于类$j$的概率,\n",
"然后选择具有最大输出值的类别$\\operatorname*{argmax}_j y_j$作为我们的预测。\n",
"例如,如果$\\hat{y}_1$、$\\hat{y}_2$和$\\hat{y}_3$分别为0.1、0.8和0.1\n",
"那么我们预测的类别是2,在我们的例子中代表“鸡”。\n",
"\n",
"然而我们能否将未规范化的预测$o$直接视作我们感兴趣的输出呢?\n",
"答案是否定的。\n",
"因为将线性层的输出直接视为概率时存在一些问题:\n",
"一方面,我们没有限制这些输出数字的总和为1。\n",
"另一方面,根据输入的不同,它们可以为负值。\n",
"这些违反了 :numref:`sec_prob`中所说的概率基本公理。\n",
"\n",
"要将输出视为概率,我们必须保证在任何数据上的输出都是非负的且总和为1。\n",
"此外,我们需要一个训练的目标函数,来激励模型精准地估计概率。\n",
"例如,\n",
"在分类器输出0.5的所有样本中,我们希望这些样本是刚好有一半实际上属于预测的类别。\n",
"这个属性叫做*校准*calibration)。\n",
"\n",
"社会科学家邓肯·卢斯于1959年在*选择模型*(choice model)的理论基础上\n",
"发明的*softmax函数*正是这样做的:\n",
"softmax函数能够将未规范化的预测变换为非负数并且总和为1,同时让模型保持\n",
"可导的性质。\n",
"为了完成这一目标,我们首先对每个未规范化的预测求幂,这样可以确保输出非负。\n",
"为了确保最终输出的概率值总和为1,我们再让每个求幂后的结果除以它们的总和。如下式:\n",
"\n",
"$$\\hat{\\mathbf{y}} = \\mathrm{softmax}(\\mathbf{o})\\quad \\text{其中}\\quad \\hat{y}_j = \\frac{\\exp(o_j)}{\\sum_k \\exp(o_k)}$$\n",
":eqlabel:`eq_softmax_y_and_o`\n",
"\n",
"这里,对于所有的$j$总有$0 \\leq \\hat{y}_j \\leq 1$。\n",
"因此,$\\hat{\\mathbf{y}}$可以视为一个正确的概率分布。\n",
"softmax运算不会改变未规范化的预测$\\mathbf{o}$之间的大小次序,只会确定分配给每个类别的概率。\n",
"因此,在预测过程中,我们仍然可以用下式来选择最有可能的类别。\n",
"\n",
"$$\n",
"\\operatorname*{argmax}_j \\hat y_j = \\operatorname*{argmax}_j o_j.\n",
"$$\n",
"\n",
"尽管softmax是一个非线性函数,但softmax回归的输出仍然由输入特征的仿射变换决定。\n",
"因此,softmax回归是一个*线性模型*linear model)。\n",
"\n",
"## 小批量样本的矢量化\n",
":label:`subsec_softmax_vectorization`\n",
"\n",
"为了提高计算效率并且充分利用GPU,我们通常会对小批量样本的数据执行矢量计算。\n",
"假设我们读取了一个批量的样本$\\mathbf{X}$\n",
"其中特征维度(输入数量)为$d$,批量大小为$n$。\n",
"此外,假设我们在输出中有$q$个类别。\n",
"那么小批量样本的特征为$\\mathbf{X} \\in \\mathbb{R}^{n \\times d}$\n",
"权重为$\\mathbf{W} \\in \\mathbb{R}^{d \\times q}$\n",
"偏置为$\\mathbf{b} \\in \\mathbb{R}^{1\\times q}$。\n",
"softmax回归的矢量计算表达式为:\n",
"\n",
"$$ \\begin{aligned} \\mathbf{O} &= \\mathbf{X} \\mathbf{W} + \\mathbf{b}, \\\\ \\hat{\\mathbf{Y}} & = \\mathrm{softmax}(\\mathbf{O}). \\end{aligned} $$\n",
":eqlabel:`eq_minibatch_softmax_reg`\n",
"\n",
"相对于一次处理一个样本,\n",
"小批量样本的矢量化加快了$\\mathbf{X}和\\mathbf{W}$的矩阵-向量乘法。\n",
"由于$\\mathbf{X}$中的每一行代表一个数据样本,\n",
"那么softmax运算可以*按行*rowwise)执行:\n",
"对于$\\mathbf{O}$的每一行,我们先对所有项进行幂运算,然后通过求和对它们进行标准化。\n",
"在 :eqref:`eq_minibatch_softmax_reg`中,\n",
"$\\mathbf{X} \\mathbf{W} + \\mathbf{b}$的求和会使用广播机制,\n",
"小批量的未规范化预测$\\mathbf{O}$和输出概率$\\hat{\\mathbf{Y}}$\n",
"都是形状为$n \\times q$的矩阵。\n",
"\n",
"## 损失函数\n",
"\n",
"接下来,我们需要一个损失函数来度量预测的效果。\n",
"我们将使用最大似然估计,这与在线性回归\n",
" :numref:`subsec_normal_distribution_and_squared_loss`\n",
"中的方法相同。\n",
"\n",
"### 对数似然\n",
"\n",
"softmax函数给出了一个向量$\\hat{\\mathbf{y}}$\n",
"我们可以将其视为“对给定任意输入$\\mathbf{x}$的每个类的条件概率”。\n",
"例如,$\\hat{y}_1$=$P(y=\\text{猫} \\mid \\mathbf{x})$。\n",
"假设整个数据集$\\{\\mathbf{X}, \\mathbf{Y}\\}$具有$n$个样本,\n",
"其中索引$i$的样本由特征向量$\\mathbf{x}^{(i)}$和独热标签向量$\\mathbf{y}^{(i)}$组成。\n",
"我们可以将估计值与实际值进行比较:\n",
"\n",
"$$\n",
"P(\\mathbf{Y} \\mid \\mathbf{X}) = \\prod_{i=1}^n P(\\mathbf{y}^{(i)} \\mid \\mathbf{x}^{(i)}).\n",
"$$\n",
"\n",
"根据最大似然估计,我们最大化$P(\\mathbf{Y} \\mid \\mathbf{X})$,相当于最小化负对数似然:\n",
"\n",
"$$\n",
"-\\log P(\\mathbf{Y} \\mid \\mathbf{X}) = \\sum_{i=1}^n -\\log P(\\mathbf{y}^{(i)} \\mid \\mathbf{x}^{(i)})\n",
"= \\sum_{i=1}^n l(\\mathbf{y}^{(i)}, \\hat{\\mathbf{y}}^{(i)}),\n",
"$$\n",
"\n",
"其中,对于任何标签$\\mathbf{y}$和模型预测$\\hat{\\mathbf{y}}$,损失函数为:\n",
"\n",
"$$ l(\\mathbf{y}, \\hat{\\mathbf{y}}) = - \\sum_{j=1}^q y_j \\log \\hat{y}_j. $$\n",
":eqlabel:`eq_l_cross_entropy`\n",
"\n",
"在本节稍后的内容会讲到, :eqref:`eq_l_cross_entropy`中的损失函数\n",
"通常被称为*交叉熵损失*cross-entropy loss)。\n",
"由于$\\mathbf{y}$是一个长度为$q$的独热编码向量,\n",
"所以除了一个项以外的所有项$j$都消失了。\n",
"由于所有$\\hat{y}_j$都是预测的概率,所以它们的对数永远不会大于$0$。\n",
"因此,如果正确地预测实际标签,即如果实际标签$P(\\mathbf{y} \\mid \\mathbf{x})=1$\n",
"则损失函数不能进一步最小化。\n",
"注意,这往往是不可能的。\n",
"例如,数据集中可能存在标签噪声(比如某些样本可能被误标),\n",
"或输入特征没有足够的信息来完美地对每一个样本分类。\n",
"\n",
"### softmax及其导数\n",
":label:`subsec_softmax_and_derivatives`\n",
"\n",
"由于softmax和相关的损失函数很常见,\n",
"因此我们需要更好地理解它的计算方式。\n",
"将 :eqref:`eq_softmax_y_and_o`代入损失 :eqref:`eq_l_cross_entropy`中。\n",
"利用softmax的定义,我们得到:\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"l(\\mathbf{y}, \\hat{\\mathbf{y}}) &= - \\sum_{j=1}^q y_j \\log \\frac{\\exp(o_j)}{\\sum_{k=1}^q \\exp(o_k)} \\\\\n",
"&= \\sum_{j=1}^q y_j \\log \\sum_{k=1}^q \\exp(o_k) - \\sum_{j=1}^q y_j o_j\\\\\n",
"&= \\log \\sum_{k=1}^q \\exp(o_k) - \\sum_{j=1}^q y_j o_j.\n",
"\\end{aligned}\n",
"$$\n",
"\n",
"考虑相对于任何未规范化的预测$o_j$的导数,我们得到:\n",
"\n",
"$$\n",
"\\partial_{o_j} l(\\mathbf{y}, \\hat{\\mathbf{y}}) = \\frac{\\exp(o_j)}{\\sum_{k=1}^q \\exp(o_k)} - y_j = \\mathrm{softmax}(\\mathbf{o})_j - y_j.\n",
"$$\n",
"\n",
"换句话说,导数是我们softmax模型分配的概率与实际发生的情况(由独热标签向量表示)之间的差异。\n",
"从这个意义上讲,这与我们在回归中看到的非常相似,\n",
"其中梯度是观测值$y$和估计值$\\hat{y}$之间的差异。\n",
"这不是巧合,在任何指数族分布模型中\n",
"(参见[本书附录中关于数学分布的一节](https://d2l.ai/chapter_appendix-mathematics-for-deep-learning/distributions.html)),\n",
"对数似然的梯度正是由此得出的。\n",
"这使梯度计算在实践中变得容易很多。\n",
"\n",
"### 交叉熵损失\n",
"\n",
"现在让我们考虑整个结果分布的情况,即观察到的不仅仅是一个结果。\n",
"对于标签$\\mathbf{y}$,我们可以使用与以前相同的表示形式。\n",
"唯一的区别是,我们现在用一个概率向量表示,如$(0.1, 0.2, 0.7)$\n",
"而不是仅包含二元项的向量$(0, 0, 1)$。\n",
"我们使用 :eqref:`eq_l_cross_entropy`来定义损失$l$\n",
"它是所有标签分布的预期损失值。\n",
"此损失称为*交叉熵损失*cross-entropy loss),它是分类问题最常用的损失之一。\n",
"本节我们将通过介绍信息论基础来理解交叉熵损失。\n",
"如果想了解更多信息论的细节,请进一步参考\n",
"[本书附录中关于信息论的一节](https://d2l.ai/chapter_appendix-mathematics-for-deep-learning/information-theory.html)。\n",
"\n",
"## 信息论基础\n",
":label:`subsec_info_theory_basics`\n",
"\n",
"*信息论*information theory)涉及编码、解码、发送以及尽可能简洁地处理信息或数据。\n",
"\n",
"### 熵\n",
"\n",
"信息论的核心思想是量化数据中的信息内容。\n",
"在信息论中,该数值被称为分布$P$的*熵*(entropy)。可以通过以下方程得到:\n",
"\n",
"$$H[P] = \\sum_j - P(j) \\log P(j).$$\n",
":eqlabel:`eq_softmax_reg_entropy`\n",
"\n",
"信息论的基本定理之一指出,为了对从分布$p$中随机抽取的数据进行编码,\n",
"我们至少需要$H[P]$“纳特(nat)”对其进行编码。\n",
"“纳特”相当于*比特*(bit),但是对数底为$e$而不是2。因此,一个纳特是$\\frac{1}{\\log(2)} \\approx 1.44$比特。\n",
"\n",
"### 信息量\n",
"\n",
"压缩与预测有什么关系呢?\n",
"想象一下,我们有一个要压缩的数据流。\n",
"如果我们很容易预测下一个数据,那么这个数据就很容易压缩。\n",
"为什么呢?\n",
"举一个极端的例子,假如数据流中的每个数据完全相同,这会是一个非常无聊的数据流。\n",
"由于它们总是相同的,我们总是知道下一个数据是什么。\n",
"所以,为了传递数据流的内容,我们不必传输任何信息。也就是说,“下一个数据是xx”这个事件毫无信息量。\n",
"\n",
"但是,如果我们不能完全预测每一个事件,那么我们有时可能会感到\"惊异\"。\n",
"克劳德·香农决定用信息量$\\log \\frac{1}{P(j)} = -\\log P(j)$来量化这种惊异程度。\n",
"在观察一个事件$j$时,并赋予它(主观)概率$P(j)$。\n",
"当我们赋予一个事件较低的概率时,我们的惊异会更大,该事件的信息量也就更大。\n",
"在 :eqref:`eq_softmax_reg_entropy`中定义的熵,\n",
"是当分配的概率真正匹配数据生成过程时的*信息量的期望*。\n",
"\n",
"### 重新审视交叉熵\n",
"\n",
"如果把熵$H(P)$想象为“知道真实概率的人所经历的惊异程度”,那么什么是交叉熵?\n",
"交叉熵*从*$P$*到*$Q$,记为$H(P, Q)$。\n",
"我们可以把交叉熵想象为“主观概率为$Q$的观察者在看到根据概率$P$生成的数据时的预期惊异”。\n",
"当$P=Q$时,交叉熵达到最低。\n",
"在这种情况下,从$P$到$Q$的交叉熵是$H(P, P)= H(P)$。\n",
"\n",
"简而言之,我们可以从两方面来考虑交叉熵分类目标:\n",
"(i)最大化观测数据的似然;(ii)最小化传达标签所需的惊异。\n",
"\n",
"## 模型预测和评估\n",
"\n",
"在训练softmax回归模型后,给出任何样本特征,我们可以预测每个输出类别的概率。\n",
"通常我们使用预测概率最高的类别作为输出类别。\n",
"如果预测与实际类别(标签)一致,则预测是正确的。\n",
"在接下来的实验中,我们将使用*精度*(accuracy)来评估模型的性能。\n",
"精度等于正确预测数与预测总数之间的比率。\n",
"\n",
"## 小结\n",
"\n",
"* softmax运算获取一个向量并将其映射为概率。\n",
"* softmax回归适用于分类问题,它使用了softmax运算中输出类别的概率分布。\n",
"* 交叉熵是一个衡量两个概率分布之间差异的很好的度量,它测量给定模型编码数据所需的比特数。\n",
"\n",
"## 练习\n",
"\n",
"1. 我们可以更深入地探讨指数族与softmax之间的联系。\n",
" 1. 计算softmax交叉熵损失$l(\\mathbf{y},\\hat{\\mathbf{y}})$的二阶导数。\n",
" 1. 计算$\\mathrm{softmax}(\\mathbf{o})$给出的分布方差,并与上面计算的二阶导数匹配。\n",
"1. 假设我们有三个类发生的概率相等,即概率向量是$(\\frac{1}{3}, \\frac{1}{3}, \\frac{1}{3})$。\n",
" 1. 如果我们尝试为它设计二进制代码,有什么问题?\n",
" 1. 请设计一个更好的代码。提示:如果我们尝试编码两个独立的观察结果会发生什么?如果我们联合编码$n$个观测值怎么办?\n",
"1. softmax是对上面介绍的映射的误称(虽然深度学习领域中很多人都使用这个名字)。真正的softmax被定义为$\\mathrm{RealSoftMax}(a, b) = \\log (\\exp(a) + \\exp(b))$。\n",
" 1. 证明$\\mathrm{RealSoftMax}(a, b) > \\mathrm{max}(a, b)$。\n",
" 1. 证明$\\lambda^{-1} \\mathrm{RealSoftMax}(\\lambda a, \\lambda b) > \\mathrm{max}(a, b)$成立,前提是$\\lambda > 0$。\n",
" 1. 证明对于$\\lambda \\to \\infty$,有$\\lambda^{-1} \\mathrm{RealSoftMax}(\\lambda a, \\lambda b) \\to \\mathrm{max}(a, b)$。\n",
" 1. soft-min会是什么样子?\n",
" 1. 将其扩展到两个以上的数字。\n",
"\n",
"[Discussions](https://discuss.d2l.ai/t/1785)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
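正文中softmax的定义、其保序性质,以及交叉熵对未规范化预测$o_j$的导数$\mathrm{softmax}(\mathbf{o})_j - y_j$,都可以用几行代码直接验证。下面是一个最小示意(其中的数值$\mathbf{o}$与独热标签$\mathbf{y}$只是演示用的假设):

```python
import torch

o = torch.tensor([1.0, 2.0, 0.5], requires_grad=True)  # 未规范化的预测(演示数值)
y = torch.tensor([0.0, 1.0, 0.0])                      # 独热标签(演示数值)

# softmax:先求幂保证非负,再除以总和使其归一
y_hat = torch.exp(o) / torch.exp(o).sum()
print(float(y_hat.sum()))                    # 总和为1(数值上接近1.0)
print(bool(y_hat.argmax() == o.argmax()))    # True:softmax不改变大小次序

# 交叉熵损失 l = -sum_j y_j log y_hat_j
l = -(y * torch.log(y_hat)).sum()
l.backward()
# 对o的导数正是 softmax(o) - y
print(torch.allclose(o.grad, y_hat.detach() - y))  # True
```

最后一行的比较对应 :numref:`subsec_softmax_and_derivatives` 小节推导出的梯度公式。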
@@ -0,0 +1,204 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "9ed6d9cb",
"metadata": {
"origin_pos": 0
},
"source": [
"# 前向传播、反向传播和计算图\n",
":label:`sec_backprop`\n",
"\n",
"我们已经学习了如何用小批量随机梯度下降训练模型。\n",
"然而当实现该算法时,我们只考虑了通过*前向传播*forward propagation)所涉及的计算。\n",
"在计算梯度时,我们只调用了深度学习框架提供的反向传播函数,而不知其所以然。\n",
"\n",
"梯度的自动计算(自动微分)大大简化了深度学习算法的实现。\n",
"在自动微分之前,即使是对复杂模型的微小调整也需要手工重新计算复杂的导数,\n",
"学术论文也不得不分配大量页面来推导更新规则。\n",
"本节将通过一些基本的数学和计算图,\n",
"深入探讨*反向传播*的细节。\n",
"首先,我们将重点放在带权重衰减($L_2$正则化)的单隐藏层多层感知机上。\n",
"\n",
"## 前向传播\n",
"\n",
"*前向传播*forward propagation或forward pass\n",
"指的是:按顺序(从输入层到输出层)计算和存储神经网络中每层的结果。\n",
"\n",
"我们将一步步研究单隐藏层神经网络的机制,\n",
"为了简单起见,我们假设输入样本是 $\\mathbf{x}\\in \\mathbb{R}^d$\n",
"并且我们的隐藏层不包括偏置项。\n",
"这里的中间变量是:\n",
"\n",
"$$\\mathbf{z}= \\mathbf{W}^{(1)} \\mathbf{x},$$\n",
"\n",
"其中$\\mathbf{W}^{(1)} \\in \\mathbb{R}^{h \\times d}$\n",
"是隐藏层的权重参数。\n",
"将中间变量$\\mathbf{z}\\in \\mathbb{R}^h$通过激活函数$\\phi$后,\n",
"我们得到长度为$h$的隐藏激活向量:\n",
"\n",
"$$\\mathbf{h}= \\phi (\\mathbf{z}).$$\n",
"\n",
"隐藏变量$\\mathbf{h}$也是一个中间变量。\n",
"假设输出层的参数只有权重$\\mathbf{W}^{(2)} \\in \\mathbb{R}^{q \\times h}$\n",
"我们可以得到输出层变量,它是一个长度为$q$的向量:\n",
"\n",
"$$\\mathbf{o}= \\mathbf{W}^{(2)} \\mathbf{h}.$$\n",
"\n",
"假设损失函数为$l$,样本标签为$y$,我们可以计算单个数据样本的损失项,\n",
"\n",
"$$L = l(\\mathbf{o}, y).$$\n",
"\n",
"根据$L_2$正则化的定义,给定超参数$\\lambda$,正则化项为\n",
"\n",
"$$s = \\frac{\\lambda}{2} \\left(\\|\\mathbf{W}^{(1)}\\|_F^2 + \\|\\mathbf{W}^{(2)}\\|_F^2\\right),$$\n",
":eqlabel:`eq_forward-s`\n",
"\n",
"其中矩阵的Frobenius范数是将矩阵展平为向量后应用的$L_2$范数。\n",
"最后,模型在给定数据样本上的正则化损失为:\n",
"\n",
"$$J = L + s.$$\n",
"\n",
"在下面的讨论中,我们将$J$称为*目标函数*objective function)。\n",
"\n",
"## 前向传播计算图\n",
"\n",
"绘制*计算图*有助于我们可视化计算中操作符和变量的依赖关系。\n",
" :numref:`fig_forward` 是与上述简单网络相对应的计算图,\n",
" 其中正方形表示变量,圆圈表示操作符。\n",
" 左下角表示输入,右上角表示输出。\n",
" 注意显示数据流的箭头方向主要是向右和向上的。\n",
"\n",
"![前向传播的计算图](../img/forward.svg)\n",
":label:`fig_forward`\n",
"\n",
"## 反向传播\n",
"\n",
"*反向传播*backward propagation或backpropagation)指的是计算神经网络参数梯度的方法。\n",
"简言之,该方法根据微积分中的*链式规则*,按相反的顺序从输出层到输入层遍历网络。\n",
"该算法存储了计算某些参数梯度时所需的任何中间变量(偏导数)。\n",
"假设我们有函数$\\mathsf{Y}=f(\\mathsf{X})$和$\\mathsf{Z}=g(\\mathsf{Y})$\n",
"其中输入和输出$\\mathsf{X}, \\mathsf{Y}, \\mathsf{Z}$是任意形状的张量。\n",
"利用链式法则,我们可以计算$\\mathsf{Z}$关于$\\mathsf{X}$的导数\n",
"\n",
"$$\\frac{\\partial \\mathsf{Z}}{\\partial \\mathsf{X}} = \\text{prod}\\left(\\frac{\\partial \\mathsf{Z}}{\\partial \\mathsf{Y}}, \\frac{\\partial \\mathsf{Y}}{\\partial \\mathsf{X}}\\right).$$\n",
"\n",
"在这里,我们使用$\\text{prod}$运算符在执行必要的操作(如换位和交换输入位置)后将其参数相乘。\n",
"对于向量,这很简单,它只是矩阵-矩阵乘法。\n",
"对于高维张量,我们使用适当的对应项。\n",
"运算符$\\text{prod}$指代了所有的这些符号。\n",
"\n",
"回想一下,在计算图 :numref:`fig_forward`中的单隐藏层简单网络的参数是\n",
"$\\mathbf{W}^{(1)}$和$\\mathbf{W}^{(2)}$。\n",
"反向传播的目的是计算梯度$\\partial J/\\partial \\mathbf{W}^{(1)}$和\n",
"$\\partial J/\\partial \\mathbf{W}^{(2)}$。\n",
"为此,我们应用链式法则,依次计算每个中间变量和参数的梯度。\n",
"计算的顺序与前向传播中执行的顺序相反,因为我们需要从计算图的结果开始,并朝着参数的方向努力。第一步是计算目标函数$J=L+s$相对于损失项$L$和正则项$s$的梯度。\n",
"\n",
"$$\\frac{\\partial J}{\\partial L} = 1 \\; \\text{and} \\; \\frac{\\partial J}{\\partial s} = 1.$$\n",
"\n",
"接下来,我们根据链式法则计算目标函数关于输出层变量$\\mathbf{o}$的梯度:\n",
"\n",
"$$\n",
"\\frac{\\partial J}{\\partial \\mathbf{o}}\n",
"= \\text{prod}\\left(\\frac{\\partial J}{\\partial L}, \\frac{\\partial L}{\\partial \\mathbf{o}}\\right)\n",
"= \\frac{\\partial L}{\\partial \\mathbf{o}}\n",
"\\in \\mathbb{R}^q.\n",
"$$\n",
"\n",
"接下来,我们计算正则化项相对于两个参数的梯度:\n",
"\n",
"$$\\frac{\\partial s}{\\partial \\mathbf{W}^{(1)}} = \\lambda \\mathbf{W}^{(1)}\n",
"\\; \\text{and} \\;\n",
"\\frac{\\partial s}{\\partial \\mathbf{W}^{(2)}} = \\lambda \\mathbf{W}^{(2)}.$$\n",
"\n",
"现在我们可以计算最接近输出层的模型参数的梯度\n",
"$\\partial J/\\partial \\mathbf{W}^{(2)} \\in \\mathbb{R}^{q \\times h}$。\n",
"使用链式法则得出:\n",
"\n",
"$$\\frac{\\partial J}{\\partial \\mathbf{W}^{(2)}}= \\text{prod}\\left(\\frac{\\partial J}{\\partial \\mathbf{o}}, \\frac{\\partial \\mathbf{o}}{\\partial \\mathbf{W}^{(2)}}\\right) + \\text{prod}\\left(\\frac{\\partial J}{\\partial s}, \\frac{\\partial s}{\\partial \\mathbf{W}^{(2)}}\\right)= \\frac{\\partial J}{\\partial \\mathbf{o}} \\mathbf{h}^\\top + \\lambda \\mathbf{W}^{(2)}.$$\n",
":eqlabel:`eq_backprop-J-h`\n",
"\n",
"为了获得关于$\\mathbf{W}^{(1)}$的梯度,我们需要继续沿着输出层到隐藏层反向传播。\n",
"关于隐藏层输出的梯度$\\partial J/\\partial \\mathbf{h} \\in \\mathbb{R}^h$由下式给出:\n",
"\n",
"$$\n",
"\\frac{\\partial J}{\\partial \\mathbf{h}}\n",
"= \\text{prod}\\left(\\frac{\\partial J}{\\partial \\mathbf{o}}, \\frac{\\partial \\mathbf{o}}{\\partial \\mathbf{h}}\\right)\n",
"= {\\mathbf{W}^{(2)}}^\\top \\frac{\\partial J}{\\partial \\mathbf{o}}.\n",
"$$\n",
"\n",
"由于激活函数$\\phi$是按元素计算的,\n",
"计算中间变量$\\mathbf{z}$的梯度$\\partial J/\\partial \\mathbf{z} \\in \\mathbb{R}^h$\n",
"需要使用按元素乘法运算符,我们用$\\odot$表示:\n",
"\n",
"$$\n",
"\\frac{\\partial J}{\\partial \\mathbf{z}}\n",
"= \\text{prod}\\left(\\frac{\\partial J}{\\partial \\mathbf{h}}, \\frac{\\partial \\mathbf{h}}{\\partial \\mathbf{z}}\\right)\n",
"= \\frac{\\partial J}{\\partial \\mathbf{h}} \\odot \\phi'\\left(\\mathbf{z}\\right).\n",
"$$\n",
"\n",
"最后,我们可以得到最接近输入层的模型参数的梯度\n",
"$\\partial J/\\partial \\mathbf{W}^{(1)} \\in \\mathbb{R}^{h \\times d}$。\n",
"根据链式法则,我们得到:\n",
"\n",
"$$\n",
"\\frac{\\partial J}{\\partial \\mathbf{W}^{(1)}}\n",
"= \\text{prod}\\left(\\frac{\\partial J}{\\partial \\mathbf{z}}, \\frac{\\partial \\mathbf{z}}{\\partial \\mathbf{W}^{(1)}}\\right) + \\text{prod}\\left(\\frac{\\partial J}{\\partial s}, \\frac{\\partial s}{\\partial \\mathbf{W}^{(1)}}\\right)\n",
"= \\frac{\\partial J}{\\partial \\mathbf{z}} \\mathbf{x}^\\top + \\lambda \\mathbf{W}^{(1)}.\n",
"$$\n",
"\n",
"## 训练神经网络\n",
"\n",
"在训练神经网络时,前向传播和反向传播相互依赖。\n",
"对于前向传播,我们沿着依赖的方向遍历计算图并计算其路径上的所有变量。\n",
"然后将这些用于反向传播,其中计算顺序与计算图的相反。\n",
"\n",
"以上述简单网络为例:一方面,在前向传播期间计算正则项\n",
" :eqref:`eq_forward-s`取决于模型参数$\\mathbf{W}^{(1)}$和\n",
"$\\mathbf{W}^{(2)}$的当前值。\n",
"它们是由优化算法根据最近迭代的反向传播给出的。\n",
"另一方面,反向传播期间参数 :eqref:`eq_backprop-J-h`的梯度计算,\n",
"取决于由前向传播给出的隐藏变量$\\mathbf{h}$的当前值。\n",
"\n",
"因此,在训练神经网络时,在初始化模型参数后,\n",
"我们交替使用前向传播和反向传播,利用反向传播给出的梯度来更新模型参数。\n",
"注意,反向传播重复利用前向传播中存储的中间值,以避免重复计算。\n",
"带来的影响之一是我们需要保留中间值,直到反向传播完成。\n",
"这也是训练比单纯的预测需要更多的内存(显存)的原因之一。\n",
"此外,这些中间值的大小与网络层的数量和批量的大小大致成正比。\n",
"因此,使用更大的批量来训练更深层次的网络更容易导致*内存不足*(out of memory)错误。\n",
"\n",
"## 小结\n",
"\n",
"* 前向传播在神经网络定义的计算图中按顺序计算和存储中间变量,它的顺序是从输入层到输出层。\n",
"* 反向传播按相反的顺序(从输出层到输入层)计算和存储神经网络的中间变量和参数的梯度。\n",
"* 在训练深度学习模型时,前向传播和反向传播是相互依赖的。\n",
"* 训练比预测需要更多的内存。\n",
"\n",
"## 练习\n",
"\n",
"1. 假设一些标量函数$\\mathbf{X}$的输入$\\mathbf{X}$是$n \\times m$矩阵。$f$相对于$\\mathbf{X}$的梯度维数是多少?\n",
"1. 向本节中描述的模型的隐藏层添加偏置项(不需要在正则化项中包含偏置项)。\n",
" 1. 画出相应的计算图。\n",
" 1. 推导正向和反向传播方程。\n",
"1. 计算本节所描述的模型,用于训练和预测的内存占用。\n",
"1. 假设想计算二阶导数。计算图发生了什么?预计计算需要多长时间?\n",
"1. 假设计算图对当前拥有的GPU来说太大了。\n",
" 1. 请试着把它划分到多个GPU上。\n",
" 1. 与小批量训练相比,有哪些优点和缺点?\n",
"\n",
"[Discussions](https://discuss.d2l.ai/t/5769)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
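正文推导的反向传播公式可以与自动微分的结果对照验证。下面是一个最小示意:沿用正文的单隐藏层网络,但为保持自包含,这里假设激活函数取ReLU、损失取$L=\frac{1}{2}\|\mathbf{o}\|^2$(这两点都是本示例的假设,并非正文的设定),手工实现各梯度并与`backward()`比较:

```python
import torch

torch.manual_seed(0)
d, h, q, lam = 4, 5, 3, 0.1
x = torch.randn(d)
W1 = torch.randn(h, d, requires_grad=True)  # 隐藏层权重
W2 = torch.randn(q, h, requires_grad=True)  # 输出层权重

# 前向传播:z = W1 x, h = phi(z), o = W2 h, J = L + s
z = W1 @ x
h_vec = torch.relu(z)
o = W2 @ h_vec
L = 0.5 * (o ** 2).sum()                    # 示意用的简单损失(假设)
s = lam / 2 * ((W1 ** 2).sum() + (W2 ** 2).sum())
J = L + s
J.backward()

# 按正文的链式法则手工反向传播
dJ_do = o.detach()                                    # 此处 dJ/do = dL/do = o
dJ_dW2 = torch.outer(dJ_do, h_vec.detach()) + lam * W2.detach()
dJ_dh = W2.detach().T @ dJ_do                         # dJ/dh = W2^T dJ/do
dJ_dz = dJ_dh * (z.detach() > 0).float()              # 乘以ReLU的按元素导数
dJ_dW1 = torch.outer(dJ_dz, x) + lam * W1.detach()

print(torch.allclose(dJ_dW2, W2.grad), torch.allclose(dJ_dW1, W1.grad))
# 两者都应为True:手工梯度与自动微分一致
```

注意手工计算时对`o`、`h_vec`、`z`调用了`detach()`,以免这些中间量的计算混入自动微分的计算图。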
File diff suppressed because it is too large
@@ -0,0 +1,498 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "1658580a",
"metadata": {
"origin_pos": 0
},
"source": [
"# 环境和分布偏移\n",
"\n",
"前面我们学习了许多机器学习的实际应用,将模型拟合各种数据集。\n",
"然而,我们从来没有想过数据最初从哪里来?以及我们计划最终如何处理模型的输出?\n",
"通常情况下,开发人员会拥有一些数据且急于开发模型,而不关注这些基本问题。\n",
"\n",
"许多失败的机器学习部署(即实际应用)都可以追究到这种方式。\n",
"有时,根据测试集的精度衡量,模型表现得非常出色。\n",
"但是当数据分布突然改变时,模型在部署中会出现灾难性的失败。\n",
"更隐蔽的是,有时模型的部署本身就是扰乱数据分布的催化剂。\n",
"举一个有点荒谬却可能真实存在的例子。\n",
"假设我们训练了一个贷款申请人违约风险模型,用来预测谁将偿还贷款或违约。\n",
"这个模型发现申请人的鞋子与违约风险相关(穿牛津鞋申请人会偿还,穿运动鞋申请人会违约)。\n",
"此后,这个模型可能倾向于向所有穿着牛津鞋的申请人发放贷款,并拒绝所有穿着运动鞋的申请人。\n",
"\n",
"这种情况可能会带来灾难性的后果。\n",
"首先,一旦模型开始根据鞋类做出决定,顾客就会理解并改变他们的行为。\n",
"不久,所有的申请者都会穿牛津鞋,而信用度却没有相应的提高。\n",
"总而言之,机器学习的许多应用中都存在类似的问题:\n",
"通过将基于模型的决策引入环境,我们可能会破坏模型。\n",
"\n",
"虽然我们不可能在一节中讨论全部的问题,但我们希望揭示一些常见的问题,\n",
"并激发批判性思考,以便及早发现这些情况,减轻灾难性的损害。\n",
"有些解决方案很简单(要求“正确”的数据),有些在技术上很困难(实施强化学习系统),\n",
"还有一些解决方案要求我们完全跳出统计预测,解决一些棘手的、与算法伦理应用有关的哲学问题。\n",
"\n",
"## 分布偏移的类型\n",
"\n",
"首先,我们考虑数据分布可能发生变化的各种方式,以及为挽救模型性能可能采取的措施。\n",
"在一个经典的情景中,假设训练数据是从某个分布$p_S(\\mathbf{x},y)$中采样的,\n",
"但是测试数据将包含从不同分布$p_T(\\mathbf{x},y)$中抽取的未标记样本。\n",
"一个清醒的现实是:如果没有任何关于$p_S$和$p_T$之间相互关系的假设,\n",
"学习一个稳健的分类器是不可能的。\n",
"\n",
"考虑一个二元分类问题:区分狗和猫。\n",
"如果分布可以以任意方式偏移,那么我们的情景允许病态的情况,\n",
"即输入的分布保持不变:$p_S(\\mathbf{x}) = p_T(\\mathbf{x})$\n",
"但标签全部翻转:$p_S(y | \\mathbf{x}) = 1 - p_T(y | \\mathbf{x})$。\n",
"换言之,如果将来所有的“猫”现在都是狗,而我们以前所说的“狗”现在是猫。\n",
"而此时输入$p(\\mathbf{x})$的分布没有任何改变,\n",
"那么我们就不可能将这种情景与分布完全没有变化的情景区分开。\n",
"\n",
"幸运的是,在对未来我们的数据可能发生变化的一些限制性假设下,\n",
"有些算法可以检测这种偏移,甚至可以动态调整,提高原始分类器的精度。\n",
"\n",
"### 协变量偏移\n",
"\n",
"在不同分布偏移中,协变量偏移可能是最为广泛研究的。\n",
"这里我们假设:虽然输入的分布可能随时间而改变,\n",
"但标签函数(即条件分布$P(y \\mid \\mathbf{x})$)没有改变。\n",
"统计学家称之为*协变量偏移*(covariate shift),\n",
"因为这个问题是由于协变量(特征)分布的变化而产生的。\n",
"虽然有时我们可以在不引用因果关系的情况下对分布偏移进行推断,\n",
"但在我们认为$\\mathbf{x}$导致$y$的情况下,协变量偏移是一种自然假设。\n",
"\n",
"考虑一下区分猫和狗的问题:训练数据包括 :numref:`fig_cat-dog-train`中的图像。\n",
"\n",
"![区分猫和狗的训练数据](../img/cat-dog-train.svg)\n",
":label:`fig_cat-dog-train`\n",
"\n",
"在测试时,我们被要求对 :numref:`fig_cat-dog-test`中的图像进行分类。\n",
"\n",
"![区分猫和狗的测试数据](../img/cat-dog-test.svg)\n",
":label:`fig_cat-dog-test`\n",
"\n",
"训练集由真实照片组成,而测试集只包含卡通图片。\n",
"假设在一个与测试集的特征有着本质不同的数据集上进行训练,\n",
"如果没有方法来适应新的领域,可能会有麻烦。\n",
"\n",
"### 标签偏移\n",
"\n",
"*标签偏移*(label shift)描述了与协变量偏移相反的问题。\n",
"这里我们假设标签边缘概率$P(y)$可以改变,\n",
"但是类别条件分布$P(\\mathbf{x} \\mid y)$在不同的领域之间保持不变。\n",
"当我们认为$y$导致$\\mathbf{x}$时,标签偏移是一个合理的假设。\n",
"例如,预测患者的疾病,我们可能根据症状来判断,\n",
"即使疾病的相对流行率随着时间的推移而变化。\n",
"标签偏移在这里是恰当的假设,因为疾病会引起症状。\n",
"在另一些情况下,标签偏移和协变量偏移假设可以同时成立。\n",
"例如,当标签是确定的,即使$y$导致$\\mathbf{x}$,协变量偏移假设也会得到满足。\n",
"有趣的是,在这些情况下,使用基于标签偏移假设的方法通常是有利的。\n",
"这是因为这些方法倾向于操作看起来像标签(通常是低维)的对象,\n",
"而不是像输入(通常是高维)的对象。\n",
"\n",
"### 概念偏移\n",
"\n",
"我们也可能会遇到*概念偏移*(concept shift):\n",
"当标签的定义发生变化时,就会出现这种问题。\n",
"这听起来很奇怪——一只猫就是一只猫,不是吗?\n",
"然而,其他类别会随着不同时间的用法而发生变化。\n",
"精神疾病的诊断标准、所谓的时髦、以及工作头衔等等,都是概念偏移的日常例子。\n",
"事实证明,假如我们环游美国,根据所在的地理位置改变我们的数据来源,\n",
"我们会发现关于“软饮”名称的分布发生了相当大的概念偏移,\n",
"如 :numref:`fig_popvssoda` 所示。\n",
"\n",
"![美国软饮名称的概念偏移](../img/popvssoda.png)\n",
":width:`400px`\n",
":label:`fig_popvssoda`\n",
"\n",
"如果我们要建立一个机器翻译系统,\n",
"$P(y \\mid \\mathbf{x})$的分布可能会因我们的位置不同而得到不同的翻译。\n",
"这个问题可能很难被发现。\n",
"所以,我们最好可以利用在时间或空间上逐渐发生偏移的知识。\n",
"\n",
"## 分布偏移示例\n",
"\n",
"在深入研究形式体系和算法之前,我们可以讨论一些协变量偏移或概念偏移可能并不明显的具体情况。\n",
"\n",
"### 医学诊断\n",
"\n",
"假设我们想设计一个检测癌症的算法,从健康人和病人那里收集数据,然后训练算法。\n",
"它工作得很好,有很高的精度,于是我们得出结论:我们已经准备好在医疗诊断领域大显身手了。\n",
"请先别着急。\n",
"\n",
"收集训练数据的分布和在实际中遇到的数据分布可能有很大的不同。\n",
"这件事在一个不幸的初创公司身上发生过,我们中的一些作者几年前和他们合作过。\n",
"他们正在研究一种血液检测方法,主要针对一种影响老年男性的疾病,\n",
"并希望利用他们从病人身上采集的血液样本进行研究。\n",
"然而,从健康男性身上获取血样比从系统中已有的病人身上获取要困难得多。\n",
"作为补偿,这家初创公司向一所大学校园内的学生征集献血,作为开发测试的健康对照样本。\n",
"然后这家初创公司问我们是否可以帮助他们建立一个用于检测疾病的分类器。\n",
"\n",
"正如我们向他们解释的那样,用近乎完美的精度来区分健康和患病人群确实很容易。\n",
"然而,这可能是因为受试者在年龄、激素水平、体力活动、\n",
"饮食、饮酒以及其他许多与疾病无关的因素上存在差异。\n",
"这对检测疾病的分类器可能并不适用。\n",
"这些抽样可能会遇到极端的协变量偏移。\n",
"此外,这种情况不太可能通过常规方法加以纠正。\n",
"简言之,他们浪费了一大笔钱。\n",
"\n",
"### 自动驾驶汽车\n",
"\n",
"对于一家想利用机器学习来开发自动驾驶汽车的公司,一个关键部件是“路沿检测器”。\n",
"由于真实的注释数据获取成本很高,他们想出了一个“聪明”的想法:\n",
"将游戏渲染引擎中的合成数据用作额外的训练数据。\n",
"这对从渲染引擎中抽取的“测试数据”非常有效,但应用在一辆真正的汽车里真是一场灾难。\n",
"正如事实证明的那样,路沿被渲染成一种非常简单的纹理。\n",
"更重要的是,所有的路沿都被渲染成了相同的纹理,路沿检测器很快就学习到了这个“特征”。\n",
"\n",
"当美军第一次试图在森林中探测坦克时,也发生了类似的事情。\n",
"他们在没有坦克的情况下拍摄了森林的航拍照片,然后把坦克开进森林,拍摄了另一组照片。\n",
"使用这两组数据训练的分类器似乎工作得很好。\n",
"不幸的是,分类器仅仅学会了如何区分有阴影的树和没有阴影的树:\n",
"第一组照片是在清晨拍摄的,而第二组是在中午拍摄的。\n",
"\n",
"### 非平稳分布\n",
"\n",
"当分布变化缓慢并且模型没有得到充分更新时,就会出现更微妙的情况:\n",
"*非平稳分布*(nonstationary distribution)。\n",
"以下是一些典型例子:\n",
"\n",
"* 训练一个计算广告模型,但却没有经常更新(例如,一个2009年训练的模型不知道一个叫iPad的不知名新设备刚刚上市);\n",
"* 建立一个垃圾邮件过滤器,它能很好地检测到所有垃圾邮件。但是,垃圾邮件发送者们变得聪明起来,制造出新的信息,看起来不像我们以前见过的任何垃圾邮件;\n",
"* 建立一个产品推荐系统,它在整个冬天都有效,但圣诞节过后很久还会继续推荐圣诞帽。\n",
"\n",
"### 更多轶事\n",
"\n",
"* 建立一个人脸检测器,它在所有基准测试中都能很好地工作,但是它在测试数据上失败了:有问题的例子是人脸充满了整个图像的特写镜头(训练集中没有这样的数据)。\n",
"* 为美国市场建立了一个网络搜索引擎,并希望将其部署到英国。\n",
"* 在一个大型数据集上训练图像分类器,其中每个类别的样本数量近乎均衡,比如1000个类别、每个类别1000张图像。但是将该系统部署到真实世界中,照片的实际标签分布显然是不均匀的。\n",
"\n",
"## 分布偏移纠正\n",
"\n",
"正如我们所讨论的,在许多情况下训练和测试分布$P(\\mathbf{x}, y)$是不同的。\n",
"在一些情况下,我们很幸运,不管协变量、标签或概念如何发生偏移,模型都能正常工作。\n",
"在另一些情况下,我们可以通过运用策略来应对这种偏移,从而做得更好。\n",
"本节的其余部分将着重于应对这种偏移的技术细节。\n",
"\n",
"### 经验风险与实际风险\n",
":label:`subsec_empirical-risk-and-risk`\n",
"\n",
"首先我们反思一下在模型训练期间到底发生了什么?\n",
"训练数据$\\{(\\mathbf{x}_1, y_1), \\ldots, (\\mathbf{x}_n, y_n)\\}$\n",
"的特征和相关的标签经过迭代,在每一个小批量之后更新模型$f$的参数。\n",
"为了简单起见,我们不考虑正则化,因此极大地降低了训练损失:\n",
"\n",
"$$\\mathop{\\mathrm{minimize}}_f \\frac{1}{n} \\sum_{i=1}^n l(f(\\mathbf{x}_i), y_i),$$\n",
":eqlabel:`eq_empirical-risk-min`\n",
"\n",
"其中$l$是损失函数,用来度量:\n",
"给定标签$y_i$,预测$f(\\mathbf{x}_i)$的“糟糕程度”。\n",
"统计学家称 :eqref:`eq_empirical-risk-min`中的这一项为经验风险。\n",
"*经验风险*(empirical risk)即整个训练数据上的平均损失,\n",
"用于近似*真实风险*(true risk),即从真实分布$p(\\mathbf{x},y)$中\n",
"抽取的所有数据的总体损失的期望值:\n",
"\n",
"$$E_{p(\\mathbf{x}, y)} [l(f(\\mathbf{x}), y)] = \\int\\int l(f(\\mathbf{x}), y) p(\\mathbf{x}, y) \\;d\\mathbf{x}dy.$$\n",
":eqlabel:`eq_true-risk`\n",
"\n",
"然而在实践中,我们通常无法获得总体数据。\n",
"因此,*经验风险最小化*即在 :eqref:`eq_empirical-risk-min`中最小化经验风险,\n",
"是一种实用的机器学习策略,希望能近似最小化真实风险。\n",
"\n",
"### 协变量偏移纠正\n",
":label:`subsec_covariate-shift-correction`\n",
"\n",
"假设对于带标签的数据$(\\mathbf{x}_i, y_i)$\n",
"我们要评估$P(y \\mid \\mathbf{x})$。\n",
"然而观测值$\\mathbf{x}_i$是从某些*源分布*$q(\\mathbf{x})$中得出的,\n",
"而不是从*目标分布*$p(\\mathbf{x})$中得出的。\n",
"幸运的是,依赖性假设意味着条件分布保持不变,即:\n",
"$p(y \\mid \\mathbf{x}) = q(y \\mid \\mathbf{x})$。\n",
"如果源分布$q(\\mathbf{x})$是“错误的”,\n",
"我们可以通过在真实风险的计算中,使用以下简单的恒等式来进行纠正:\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"\\int\\int l(f(\\mathbf{x}), y) p(y \\mid \\mathbf{x})p(\\mathbf{x}) \\;d\\mathbf{x}dy =\n",
"\\int\\int l(f(\\mathbf{x}), y) q(y \\mid \\mathbf{x})q(\\mathbf{x})\\frac{p(\\mathbf{x})}{q(\\mathbf{x})} \\;d\\mathbf{x}dy.\n",
"\\end{aligned}\n",
"$$\n",
"\n",
"换句话说,我们需要根据数据来自正确分布与来自错误分布的概率之比,\n",
"来重新衡量每个数据样本的权重:\n",
"\n",
"$$\\beta_i \\stackrel{\\mathrm{def}}{=} \\frac{p(\\mathbf{x}_i)}{q(\\mathbf{x}_i)}.$$\n",
"\n",
"将权重$\\beta_i$代入到每个数据样本$(\\mathbf{x}_i, y_i)$中,\n",
"我们可以使用“加权经验风险最小化”来训练模型:\n",
"\n",
"$$\\mathop{\\mathrm{minimize}}_f \\frac{1}{n} \\sum_{i=1}^n \\beta_i l(f(\\mathbf{x}_i), y_i).$$\n",
":eqlabel:`eq_weighted-empirical-risk-min`\n",
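加权经验风险最小化的核心计算可以用如下的NumPy草图示意(其中的每样本损失与权重均为假设的演示数据,并非来自某个真实模型):

```python
import numpy as np

def weighted_empirical_risk(losses, beta):
    """加权经验风险:每个样本的损失l(f(x_i), y_i)乘以权重beta_i后取平均"""
    losses = np.asarray(losses, dtype=float)
    beta = np.asarray(beta, dtype=float)
    return float((beta * losses).mean())

# 假设的每样本损失与重要性权重(仅作演示)
losses = [0.5, 1.0, 2.0, 0.1]
beta = [1.0, 2.0, 0.5, 1.0]
print(weighted_empirical_risk(losses, beta))  # (0.5+2.0+1.0+0.1)/4 = 0.9
```

在实际训练中,只需把框架计算出的每样本损失(例如`reduction='none'`的损失)按同样方式乘以$\beta_i$再取平均即可。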
"\n",
"由于不知道这个比率,我们需要估计它。\n",
"有许多方法都可以用,包括一些花哨的算子理论方法,\n",
"试图直接使用最小范数或最大熵原理重新校准期望算子。\n",
"对于任意一种这样的方法,我们都需要从两个分布中抽取样本:\n",
"“真实”的分布$p$,通过访问测试数据获取;\n",
"训练分布$q$,通过访问训练数据很容易获得。\n",
"请注意,我们只需要特征$\\mathbf{x} \\sim p(\\mathbf{x})$\n",
"不需要访问标签$y \\sim p(y)$。\n",
"\n",
"在这种情况下,有一种非常有效的方法可以得到几乎与原始方法一样好的结果:\n",
"*对数几率回归*(logistic regression)。\n",
"这是用于二元分类的softmax回归(见 :numref:`sec_softmax`)的一个特例。\n",
"综上所述,我们学习了一个分类器来区分从$p(\\mathbf{x})$抽取的数据\n",
"和从$q(\\mathbf{x})$抽取的数据。\n",
"如果无法区分这两个分布,则意味着相关的样本可能来自这两个分布中的任何一个。\n",
"另一方面,任何可以很好区分的样本都应该相应地显著增加或减少权重。\n",
"\n",
"为了简单起见,假设我们分别从$p(\\mathbf{x})$和$q(\\mathbf{x})$\n",
"两个分布中抽取相同数量的样本。\n",
"现在用$z$标签表示:从$p$抽取的数据为$1$,从$q$抽取的数据为$-1$。\n",
"然后,混合数据集中的概率由下式给出\n",
"\n",
"$$P(z=1 \\mid \\mathbf{x}) = \\frac{p(\\mathbf{x})}{p(\\mathbf{x})+q(\\mathbf{x})} \\text{ and hence } \\frac{P(z=1 \\mid \\mathbf{x})}{P(z=-1 \\mid \\mathbf{x})} = \\frac{p(\\mathbf{x})}{q(\\mathbf{x})}.$$\n",
"\n",
"因此,如果我们使用对数几率回归方法,其中\n",
"$P(z=1 \\mid \\mathbf{x})=\\frac{1}{1+\\exp(-h(\\mathbf{x}))}$\n",
"($h$是一个参数化函数),则很自然有:\n",
"\n",
"$$\n",
"\\beta_i = \\frac{1/(1 + \\exp(-h(\\mathbf{x}_i)))}{\\exp(-h(\\mathbf{x}_i))/(1 + \\exp(-h(\\mathbf{x}_i)))} = \\exp(h(\\mathbf{x}_i)).\n",
"$$\n",
"\n",
"因此,我们需要解决两个问题:\n",
"第一个问题是关于区分来自两个分布的数据;\n",
"第二个问题是关于 :eqref:`eq_weighted-empirical-risk-min`\n",
"中的加权经验风险的最小化问题。\n",
"在这个问题中,我们将用$\\beta_i$对其中的每一项进行加权。\n",
"\n",
"现在,我们来看一下完整的协变量偏移纠正算法。\n",
"假设我们有一个训练集$\\{(\\mathbf{x}_1, y_1), \\ldots, (\\mathbf{x}_n, y_n)\\}$\n",
"和一个未标记的测试集$\\{\\mathbf{u}_1, \\ldots, \\mathbf{u}_m\\}$。\n",
"对于协变量偏移,我们假设$1 \\leq i \\leq n$的$\\mathbf{x}_i$来自某个源分布,\n",
"$\\mathbf{u}_i$来自目标分布。\n",
"以下是纠正协变量偏移的典型算法:\n",
"\n",
"1. 生成一个二元分类训练集:$\\{(\\mathbf{x}_1, -1), \\ldots, (\\mathbf{x}_n, -1), (\\mathbf{u}_1, 1), \\ldots, (\\mathbf{u}_m, 1)\\}$。\n",
"1. 用对数几率回归训练二元分类器得到函数$h$。\n",
"1. 使用$\\beta_i = \\exp(h(\\mathbf{x}_i))$或更好的$\\beta_i = \\min(\\exp(h(\\mathbf{x}_i)), c)$($c$为常量)对训练数据进行加权。\n",
"1. 使用权重$\\beta_i$进行 :eqref:`eq_weighted-empirical-risk-min` 中$\\{(\\mathbf{x}_1, y_1), \\ldots, (\\mathbf{x}_n, y_n)\\}$的训练。\n",
"\n",
"请注意,上述算法依赖于一个重要的假设:\n",
"需要目标分布(例如,测试分布)中的每个数据样本在训练时出现的概率非零。\n",
"如果我们找到$p(\\mathbf{x}) > 0$但$q(\\mathbf{x}) = 0$的点,\n",
"那么相应的重要性权重会是无穷大。\n",
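上述算法可以用一个只依赖NumPy的最小草图来示意。为了方便使用sigmoid,这里把二元标签记为$0$(源/训练)和$1$(目标/测试),这与正文中$-1/1$的约定在对数几率回归下是等价的;函数名`estimate_beta`以及步数、学习率、截断常量$c$都是演示用的假设取值:

```python
import numpy as np

def estimate_beta(X_train, X_test, num_steps=500, lr=0.1, c=10.0):
    """训练对数几率回归区分源样本(z=0)和目标样本(z=1),
    再用beta_i = min(exp(h(x_i)), c)对训练样本加权"""
    X = np.vstack([X_train, X_test])
    z = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
    Xb = np.hstack([X, np.ones((len(X), 1))])  # 追加常数列作为偏置项
    w = np.zeros(Xb.shape[1])
    for _ in range(num_steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))      # P(z=1|x)
        w -= lr * Xb.T @ (p - z) / len(X)      # 负对数似然的梯度下降
    h = np.hstack([X_train, np.ones((len(X_train), 1))]) @ w
    return np.minimum(np.exp(h), c)            # 截断以避免权重过大

# 演示:源分布N(0,1)、目标分布N(1,1),靠近目标分布的训练样本应获得更大权重
rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(200, 1))
X_test = rng.normal(1.0, 1.0, size=(200, 1))
beta = estimate_beta(X_train, X_test)
```

得到的`beta`即可作为 :eqref:`eq_weighted-empirical-risk-min` 中每个训练样本的权重。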
"\n",
"### 标签偏移纠正\n",
"\n",
"假设我们处理的是$k$个类别的分类任务。\n",
"使用与 :numref:`subsec_covariate-shift-correction`中相同的符号,\n",
"$q$和$p$分别是源分布(例如训练时的分布)和目标分布(例如测试时的分布)。\n",
"假设标签的分布随时间变化:$q(y) \\neq p(y)$\n",
"但类别条件分布保持不变:$q(\\mathbf{x} \\mid y)=p(\\mathbf{x} \\mid y)$。\n",
"如果源分布$q(y)$是“错误的”,\n",
"我们可以根据 :eqref:`eq_true-risk`中定义的真实风险中的恒等式进行更正:\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"\\int\\int l(f(\\mathbf{x}), y) p(\\mathbf{x} \\mid y)p(y) \\;d\\mathbf{x}dy =\n",
"\\int\\int l(f(\\mathbf{x}), y) q(\\mathbf{x} \\mid y)q(y)\\frac{p(y)}{q(y)} \\;d\\mathbf{x}dy.\n",
"\\end{aligned}\n",
"$$\n",
"\n",
"这里,重要性权重将对应于标签似然比率\n",
"\n",
"$$\\beta_i \\stackrel{\\mathrm{def}}{=} \\frac{p(y_i)}{q(y_i)}.$$\n",
"\n",
"标签偏移的一个好处是,如果我们在源分布上有一个相当好的模型,\n",
"那么我们可以得到对这些权重的一致估计,而不需要处理周边的其他维度。\n",
"在深度学习中,输入往往是高维对象(如图像),而标签通常是低维(如类别)。\n",
"\n",
"为了估计目标标签分布,我们首先采用性能相当好的现成的分类器(通常基于训练数据进行训练),\n",
"并使用验证集(也来自训练分布)计算其混淆矩阵。\n",
"混淆矩阵$\\mathbf{C}$是一个$k \\times k$矩阵,\n",
"其中每列对应于标签类别,每行对应于模型的预测类别。\n",
"每个单元格的值$c_{ij}$是验证集中,真实标签为$j$\n",
"而我们的模型预测为$i$的样本数量所占的比例。\n",
"\n",
"现在,我们不能直接计算目标数据上的混淆矩阵,\n",
"因为我们无法看到真实环境下的样本的标签,\n",
"除非我们再搭建一个复杂的实时标注流程。\n",
"然而,我们所能做的是将所有模型在测试时的预测取平均数,\n",
"得到平均模型输出$\\mu(\\hat{\\mathbf{y}}) \\in \\mathbb{R}^k$\n",
"其中第$i$个元素$\\mu(\\hat{y}_i)$是模型在测试集上对类别$i$的平均预测分数。\n",
"\n",
"结果表明,如果我们的分类器一开始就相当准确,\n",
"并且目标数据只包含我们以前见过的类别,\n",
"以及如果标签偏移假设成立(这里最强的假设),\n",
"我们就可以通过求解一个简单的线性系统来估计测试集的标签分布\n",
"\n",
"$$\\mathbf{C} p(\\mathbf{y}) = \\mu(\\hat{\\mathbf{y}}),$$\n",
"\n",
"因为作为一个估计,$\\sum_{j=1}^k c_{ij} p(y_j) = \\mu(\\hat{y}_i)$\n",
"对所有$1 \\leq i \\leq k$成立,\n",
"其中$p(y_j)$是$k$维标签分布向量$p(\\mathbf{y})$的第$j$个元素。\n",
"如果我们的分类器一开始就足够精确,那么混淆矩阵$\\mathbf{C}$将是可逆的,\n",
"进而我们可以得到一个解$p(\\mathbf{y}) = \\mathbf{C}^{-1} \\mu(\\hat{\\mathbf{y}})$。\n",
"\n",
"因为我们观测源数据上的标签,所以很容易估计分布$q(y)$。\n",
"那么对于标签为$y_i$的任何训练样本$i$\n",
"我们可以使用我们估计的$p(y_i)/q(y_i)$比率来计算权重$\\beta_i$\n",
"并将其代入 :eqref:`eq_weighted-empirical-risk-min`中的加权经验风险最小化中。\n",
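求解$\mathbf{C} p(\mathbf{y}) = \mu(\hat{\mathbf{y}})$并计算权重只需几行NumPy。下面的混淆矩阵、平均模型输出和源标签分布均为假设的演示数据:

```python
import numpy as np

def estimate_label_shift(confusion, mu_hat, q_y):
    """解线性方程组 C p(y) = mu_hat 得到目标标签分布p(y),
    并返回重要性权重 beta_j = p(y_j) / q(y_j)"""
    p_y = np.linalg.solve(confusion, mu_hat)
    p_y = np.clip(p_y, 0, None)   # 数值误差可能产生微小负值,截断后再归一化
    p_y = p_y / p_y.sum()
    return p_y, p_y / q_y

# 假设的2类混淆矩阵(列为真实标签、行为预测)与测试集上的平均模型输出
C = np.array([[0.9, 0.2],
              [0.1, 0.8]])
mu_hat = np.array([0.62, 0.38])
q_y = np.array([0.5, 0.5])        # 源分布中两类标签各占一半
p_y, beta = estimate_label_shift(C, mu_hat, q_y)
print(p_y, beta)  # 约 [0.6 0.4] 与 [1.2 0.8]
```

可以验证$\mathbf{C}\,[0.6, 0.4]^\top = [0.62, 0.38]^\top$,即测试集中类别比例已从一半一半偏移为六四开。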
"\n",
"### 概念偏移纠正\n",
"\n",
"概念偏移很难用原则性的方式解决。\n",
"例如,在一个问题突然从“区分猫和狗”偏移为“区分白色和黑色动物”的情况下,\n",
"除了从零开始收集新标签和训练,别无妙方。\n",
"幸运的是,在实践中这种极端的偏移是罕见的。\n",
"相反,通常情况下,概念的变化总是缓慢的。\n",
"比如下面是一些例子:\n",
"\n",
"* 在计算广告中,新产品推出后,旧产品变得不那么受欢迎了。这意味着广告的分布和受欢迎程度是逐渐变化的,任何点击率预测器都需要随之逐渐变化;\n",
"* 由于环境的磨损,交通摄像头的镜头会逐渐退化,影响摄像头的图像质量;\n",
"* 新闻内容逐渐变化(即新新闻的出现)。\n",
"\n",
"在这种情况下,我们可以使用与训练网络相同的方法,使其适应数据的变化。\n",
"换言之,我们使用新数据更新现有的网络权重,而不是从头开始训练。\n",
"\n",
"## 学习问题的分类法\n",
"\n",
"有了如何处理分布变化的知识,我们现在可以考虑机器学习问题形式化的其他方面。\n",
"\n",
"### 批量学习\n",
"\n",
"在*批量学习*(batch learning)中,我们可以访问一组训练特征和标签\n",
"$\\{(\\mathbf{x}_1, y_1), \\ldots, (\\mathbf{x}_n, y_n)\\}$,\n",
"我们使用这些特征和标签训练$f(\\mathbf{x})$。\n",
"然后,我们部署此模型来对来自同一分布的新数据$(\\mathbf{x}, y)$进行评分。\n",
"例如,我们可以根据猫和狗的大量图片训练猫检测器。\n",
"一旦我们训练了它,我们就把它作为智能猫门的计算机视觉系统的一部分,来控制只允许猫进入。\n",
"然后这个系统会被安装在客户家中,基本再也不会更新。\n",
"\n",
"### 在线学习\n",
"\n",
"除了“批量”地学习,我们还可以单个“在线”学习数据$(\\mathbf{x}_i, y_i)$。\n",
"更具体地说,我们首先观测到$\\mathbf{x}_i$\n",
"然后我们得出一个估计值$f(\\mathbf{x}_i)$\n",
"只有当我们做到这一点后,我们才观测到$y_i$。\n",
"然后根据我们的决定,我们会得到奖励或损失。\n",
"许多实际问题都属于这一类。\n",
"例如,我们需要预测明天的股票价格,\n",
"这样我们就可以根据这个预测进行交易。\n",
"在一天结束时,我们会评估我们的预测是否盈利。\n",
"换句话说,在*在线学习*(online learning)中,我们有以下的循环。\n",
"在这个循环中,给定新的观测结果,我们会不断地改进我们的模型。\n",
"\n",
"$$\n",
"\\mathrm{model} ~ f_t \\longrightarrow\n",
"\\mathrm{data} ~ \\mathbf{x}_t \\longrightarrow\n",
"\\mathrm{estimate} ~ f_t(\\mathbf{x}_t) \\longrightarrow\n",
"\\mathrm{observation} ~ y_t \\longrightarrow\n",
"\\mathrm{loss} ~ l(y_t, f_t(\\mathbf{x}_t)) \\longrightarrow\n",
"\\mathrm{model} ~ f_{t+1}\n",
"$$\n",
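上面的循环可以写成一个最小的在线学习草图:对每个到来的样本先预测、再观测标签、计算损失并立即更新模型。这里以一维线性模型$f(x)=wx$和平方损失为例,学习率等均为假设取值:

```python
def online_sgd(stream, lr=0.1):
    """在线学习循环:预测f_t(x_t) -> 观测y_t -> 计算损失 -> 更新得到f_{t+1}"""
    w = 0.0
    losses = []
    for x_t, y_t in stream:
        y_hat = w * x_t                     # estimate f_t(x_t)
        losses.append((y_hat - y_t) ** 2)   # loss l(y_t, f_t(x_t))
        w -= lr * 2 * (y_hat - y_t) * x_t   # 梯度更新,得到 model f_{t+1}
    return w, losses

# 数据流满足 y = 2x:模型参数应逐渐逼近2,损失逐渐减小
stream = [(1.0, 2.0)] * 50
w, losses = online_sgd(stream)
```

与批量学习不同,这里每个样本只被使用一次,模型随观测不断改进。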
"\n",
"### 老虎机\n",
"\n",
"*老虎机*(bandits)是上述问题的一个特例。\n",
"在大多数学习问题中,我们有一个连续参数化的函数$f$(例如,一个深度网络);\n",
"但在一个*老虎机*问题中,我们只有有限数量的手臂可以拉动。\n",
"也就是说,我们可以采取的行动是有限的。\n",
"对于这个更简单的问题,可以获得更强的最优性理论保证,这并不令人惊讶。\n",
"我们之所以列出它,主要是因为这个问题经常被视为一个单独的学习问题的情景。\n",
"\n",
"### 控制\n",
"\n",
"在很多情况下,环境会记住我们所做的事。\n",
"不一定是以一种对抗的方式,但它会记住,而且它的反应将取决于之前发生的事情。\n",
"例如,咖啡锅炉控制器将根据之前是否加热锅炉来观测到不同的温度。\n",
"在这种情况下,PID(比例—积分—微分)控制器算法是一个流行的选择。\n",
"同样,一个用户在新闻网站上的行为将取决于之前向她展示的内容(例如,大多数新闻她只阅读一次)。\n",
"许多这样的算法形成了一个环境模型,在这个模型中,他们的行为使得他们的决策看起来不那么随机。\n",
"近年来,控制理论(如PID的变体)也被用于自动调整超参数,\n",
"以获得更好的解耦和重建质量,提高生成文本的多样性和生成图像的重建质量\n",
" :cite:`Shao.Yao.Sun.ea.2020`。\n",
"\n",
"### 强化学习\n",
"\n",
"*强化学习*(reinforcement learning)强调如何基于环境而行动,以取得最大化的预期利益。\n",
"国际象棋、围棋、西洋双陆棋或星际争霸都是强化学习的应用实例。\n",
"再比如,为自动驾驶汽车设计一个控制器,而其他车辆可能会对它的驾驶方式做出反应\n",
"(例如,试图避开它、试图制造事故,或者试图与它合作)。\n",
"\n",
"### 考虑到环境\n",
"\n",
"上述不同情况之间的一个关键区别是:\n",
"同一个策略在静止环境中可能一直有效,\n",
"但当环境能够改变时就未必始终有效。\n",
"例如,一个交易者发现的套利机会很可能在他开始利用它时就消失了。\n",
"环境变化的速度和方式在很大程度上决定了我们可以采用的算法类型。\n",
"例如,如果我们知道事情只会缓慢地变化,\n",
"就可以迫使任何估计也只能缓慢地发生改变。\n",
"如果我们知道环境可能会瞬间发生变化,但这种变化非常罕见,\n",
"我们就可以在使用算法时考虑到这一点。\n",
"当一个数据科学家试图解决的问题会随着时间的推移而发生变化时,\n",
"这些类型的知识至关重要。\n",
"\n",
"## 机器学习中的公平、责任和透明度\n",
"\n",
"最后,重要的是,当我们部署机器学习系统时,\n",
"不仅仅是在优化一个预测模型,\n",
"而通常是在提供一个会被用来(部分或完全)进行自动化决策的工具。\n",
"这些技术系统可能会通过其进行的决定而影响到每个人的生活。\n",
"\n",
"从考虑预测到决策的飞跃不仅提出了新的技术问题,\n",
"而且还提出了一系列必须仔细考虑的伦理问题。\n",
"如果我们正在部署一个医疗诊断系统,我们需要知道它可能适用于哪些人群,哪些人群可能无效。\n",
"忽视某个亚群体可预见的福祉风险,可能会导致我们提供劣质的护理。\n",
"此外,一旦我们规划整个决策系统,我们必须退后一步,重新考虑如何评估我们的技术。\n",
"这种视角的转变带来的后果之一是:我们会发现精度很少是合适的衡量标准。\n",
"例如,当我们将预测转化为行动时,我们通常会考虑各种犯错方式的潜在成本。\n",
"举个例子:将图像错误地分到某一类别可能被视为种族歧视,而错误地分到另一个类别是无害的,\n",
"那么我们可能需要相应地调整我们的阈值,在设计决策方式时考虑到这些社会价值。\n",
"我们还需要注意预测系统如何导致反馈循环。\n",
"例如,考虑预测性警务系统,它将巡逻人员分配到预测犯罪率较高的地区。\n",
"很容易看出一种令人担忧的模式是如何出现的:\n",
"\n",
" 1. 犯罪率高的社区会得到更多的巡逻;\n",
" 2. 因此,在这些社区中会发现更多的犯罪行为,输入可用于未来迭代的训练数据;\n",
" 3. 面对更多的积极因素,该模型预测这些社区还会有更多的犯罪;\n",
" 4. 下一次迭代中,更新后的模型会更加倾向于针对同一个地区,这会导致更多的犯罪行为被发现等等。\n",
"\n",
"通常,模型的预测与其训练数据相互耦合的各种机制在建模过程中都没有得到考虑,\n",
"研究人员称这种现象为“失控反馈循环”。\n",
"此外,我们首先要注意我们是否解决了正确的问题。\n",
"比如,预测算法现在在信息传播中起着巨大的中介作用,\n",
"个人看到的新闻应该由他们喜欢的Facebook页面决定吗?\n",
"这些只是在机器学习职业生涯中可能遇到的令人感到“压力山大”的道德困境中的一小部分。\n",
"\n",
"## 小结\n",
"\n",
"* 在许多情况下,训练集和测试集并不来自同一个分布。这就是所谓的分布偏移。\n",
"* 真实风险是从真实分布中抽取的所有数据的总体损失的预期。然而,这个数据总体通常是无法获得的。经验风险是训练数据的平均损失,用于近似真实风险。在实践中,我们进行经验风险最小化。\n",
"* 在相应的假设条件下,可以在测试时检测并纠正协变量偏移和标签偏移。在测试时,不考虑这种偏移可能会成为问题。\n",
"* 在某些情况下,环境可能会记住自动操作并以令人惊讶的方式做出响应。在构建模型时,我们必须考虑到这种可能性,并继续监控实时系统,并对我们的模型和环境以意想不到的方式纠缠在一起的可能性持开放态度。\n",
"\n",
"## 练习\n",
"\n",
"1. 当我们改变搜索引擎的行为时会发生什么?用户可能会做什么?广告商呢?\n",
"2. 实现一个协变量偏移检测器。提示:构建一个分类器。\n",
"3. 实现协变量偏移纠正。\n",
"4. 除了分布偏移,还有什么会影响经验风险接近真实风险的程度?\n",
"\n",
"[Discussions](https://discuss.d2l.ai/t/1822)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,48 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "f0f1791b",
"metadata": {
"origin_pos": 0
},
"source": [
"# 多层感知机\n",
":label:`chap_perceptrons`\n",
"\n",
"在本章中,我们将第一次介绍真正的*深度*网络。\n",
"最简单的深度网络称为*多层感知机*。多层感知机由多层神经元组成,\n",
"每一层与它的上一层相连,从中接收输入;\n",
"同时每一层也与它的下一层相连,影响当前层的神经元。\n",
"当我们训练容量较大的模型时,我们面临着*过拟合*的风险。\n",
"因此,本章将从基本的概念介绍开始讲起,包括*过拟合*、*欠拟合*和模型选择。\n",
"为了解决这些问题,本章将介绍*权重衰减*和*暂退法*等正则化技术。\n",
"我们还将讨论数值稳定性和参数初始化相关的问题,\n",
"这些问题是成功训练深度网络的关键。\n",
"在本章的最后,我们将把所介绍的内容应用到一个真实的案例:房价预测。\n",
"关于模型计算性能、可伸缩性和效率相关的问题,我们将放在后面的章节中讨论。\n",
"\n",
":begin_tab:toc\n",
" - [mlp](mlp.ipynb)\n",
" - [mlp-scratch](mlp-scratch.ipynb)\n",
" - [mlp-concise](mlp-concise.ipynb)\n",
" - [underfit-overfit](underfit-overfit.ipynb)\n",
" - [weight-decay](weight-decay.ipynb)\n",
" - [dropout](dropout.ipynb)\n",
" - [backprop](backprop.ipynb)\n",
" - [numerical-stability-and-init](numerical-stability-and-init.ipynb)\n",
" - [environment](environment.ipynb)\n",
" - [kaggle-house-price](kaggle-house-price.ipynb)\n",
":end_tab:\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,976 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "d5217b24",
"metadata": {
"origin_pos": 0
},
"source": [
"# 多层感知机的简洁实现\n",
":label:`sec_mlp_concise`\n",
"\n",
"本节将介绍(**通过高级API更简洁地实现多层感知机**)。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "f4b9d183",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:04:20.711610Z",
"iopub.status.busy": "2023-08-18T07:04:20.711337Z",
"iopub.status.idle": "2023-08-18T07:04:22.715766Z",
"shell.execute_reply": "2023-08-18T07:04:22.714884Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import torch\n",
"from torch import nn\n",
"from d2l import torch as d2l"
]
},
{
"cell_type": "markdown",
"id": "d1b8af0c",
"metadata": {
"origin_pos": 5
},
"source": [
"## 模型\n",
"\n",
"与softmax回归的简洁实现( :numref:`sec_softmax_concise`)相比,\n",
"唯一的区别是我们添加了2个全连接层(之前我们只添加了1个全连接层)。\n",
"第一层是[**隐藏层**],它(**包含256个隐藏单元,并使用了ReLU激活函数**)。\n",
"第二层是输出层。\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a11cfbe9",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:04:22.719981Z",
"iopub.status.busy": "2023-08-18T07:04:22.719298Z",
"iopub.status.idle": "2023-08-18T07:04:22.748628Z",
"shell.execute_reply": "2023-08-18T07:04:22.747813Z"
},
"origin_pos": 7,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"net = nn.Sequential(nn.Flatten(),\n",
" nn.Linear(784, 256),\n",
" nn.ReLU(),\n",
" nn.Linear(256, 10))\n",
"\n",
"def init_weights(m):\n",
" if type(m) == nn.Linear:\n",
" nn.init.normal_(m.weight, std=0.01)\n",
"\n",
"net.apply(init_weights);"
]
},
{
"cell_type": "markdown",
"id": "f5aceed6",
"metadata": {
"origin_pos": 10
},
"source": [
"[**训练过程**]的实现与我们实现softmax回归时完全相同,\n",
"这种模块化设计使我们能够将与模型架构有关的内容独立出来。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "b23e8ab9",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:04:22.753701Z",
"iopub.status.busy": "2023-08-18T07:04:22.753406Z",
"iopub.status.idle": "2023-08-18T07:04:22.758051Z",
"shell.execute_reply": "2023-08-18T07:04:22.757284Z"
},
"origin_pos": 12,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"batch_size, lr, num_epochs = 256, 0.1, 10\n",
"loss = nn.CrossEntropyLoss(reduction='none')\n",
"trainer = torch.optim.SGD(net.parameters(), lr=lr)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "78ac9bf1",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:04:22.761842Z",
"iopub.status.busy": "2023-08-18T07:04:22.761295Z",
"iopub.status.idle": "2023-08-18T07:05:05.308680Z",
"shell.execute_reply": "2023-08-18T07:05:05.307786Z"
},
"origin_pos": 15,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"image/svg+xml": [
"<?xml version=\"1.0\" encoding=\"utf-8\" standalone=\"no\"?>\n",
"<!DOCTYPE svg PUBLIC \"-//W3C//DTD SVG 1.1//EN\"\n",
" \"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd\">\n",
"<svg xmlns:xlink=\"http://www.w3.org/1999/xlink\" width=\"238.965625pt\" height=\"180.65625pt\" viewBox=\"0 0 238.965625 180.65625\" xmlns=\"http://www.w3.org/2000/svg\" version=\"1.1\">\n",
" <metadata>\n",
" <rdf:RDF xmlns:dc=\"http://purl.org/dc/elements/1.1/\" xmlns:cc=\"http://creativecommons.org/ns#\" xmlns:rdf=\"http://www.w3.org/1999/02/22-rdf-syntax-ns#\">\n",
" <cc:Work>\n",
" <dc:type rdf:resource=\"http://purl.org/dc/dcmitype/StillImage\"/>\n",
" <dc:date>2023-08-18T07:05:05.270258</dc:date>\n",
" <dc:format>image/svg+xml</dc:format>\n",
" <dc:creator>\n",
" <cc:Agent>\n",
" <dc:title>Matplotlib v3.5.1, https://matplotlib.org/</dc:title>\n",
" </cc:Agent>\n",
" </dc:creator>\n",
" </cc:Work>\n",
" </rdf:RDF>\n",
" </metadata>\n",
" <defs>\n",
" <style type=\"text/css\">*{stroke-linejoin: round; stroke-linecap: butt}</style>\n",
" </defs>\n",
" <g id=\"figure_1\">\n",
" <g id=\"patch_1\">\n",
" <path d=\"M 0 180.65625 \n",
"L 238.965625 180.65625 \n",
"L 238.965625 0 \n",
"L 0 0 \n",
"L 0 180.65625 \n",
"z\n",
"\" style=\"fill: none\"/>\n",
" </g>\n",
" <g id=\"axes_1\">\n",
" <g id=\"patch_2\">\n",
" <path d=\"M 30.103125 143.1 \n",
"L 225.403125 143.1 \n",
"L 225.403125 7.2 \n",
"L 30.103125 7.2 \n",
"z\n",
"\" style=\"fill: #ffffff\"/>\n",
" </g>\n",
" <g id=\"matplotlib.axis_1\">\n",
" <g id=\"xtick_1\">\n",
" <g id=\"line2d_1\">\n",
" <path d=\"M 51.803125 143.1 \n",
"L 51.803125 7.2 \n",
"\" clip-path=\"url(#p38f7277f50)\" style=\"fill: none; stroke: #b0b0b0; stroke-width: 0.8; stroke-linecap: square\"/>\n",
" </g>\n",
" <g id=\"line2d_2\">\n",
" <defs>\n",
" <path id=\"m69cc5df15a\" d=\"M 0 0 \n",
"L 0 3.5 \n",
"\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </defs>\n",
" <g>\n",
" <use xlink:href=\"#m69cc5df15a\" x=\"51.803125\" y=\"143.1\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_1\">\n",
" <!-- 2 -->\n",
" <g transform=\"translate(48.621875 157.698438)scale(0.1 -0.1)\">\n",
" <defs>\n",
" <path id=\"DejaVuSans-32\" d=\"M 1228 531 \n",
"L 3431 531 \n",
"L 3431 0 \n",
"L 469 0 \n",
"L 469 531 \n",
"Q 828 903 1448 1529 \n",
"Q 2069 2156 2228 2338 \n",
"Q 2531 2678 2651 2914 \n",
"Q 2772 3150 2772 3378 \n",
"Q 2772 3750 2511 3984 \n",
"Q 2250 4219 1831 4219 \n",
"Q 1534 4219 1204 4116 \n",
"Q 875 4013 500 3803 \n",
"L 500 4441 \n",
"Q 881 4594 1212 4672 \n",
"Q 1544 4750 1819 4750 \n",
"Q 2544 4750 2975 4387 \n",
"Q 3406 4025 3406 3419 \n",
"Q 3406 3131 3298 2873 \n",
"Q 3191 2616 2906 2266 \n",
"Q 2828 2175 2409 1742 \n",
"Q 1991 1309 1228 531 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" </defs>\n",
" <use xlink:href=\"#DejaVuSans-32\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"xtick_2\">\n",
" <g id=\"line2d_3\">\n",
" <path d=\"M 95.203125 143.1 \n",
"L 95.203125 7.2 \n",
"\" clip-path=\"url(#p38f7277f50)\" style=\"fill: none; stroke: #b0b0b0; stroke-width: 0.8; stroke-linecap: square\"/>\n",
" </g>\n",
" <g id=\"line2d_4\">\n",
" <g>\n",
" <use xlink:href=\"#m69cc5df15a\" x=\"95.203125\" y=\"143.1\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_2\">\n",
" <!-- 4 -->\n",
" <g transform=\"translate(92.021875 157.698438)scale(0.1 -0.1)\">\n",
" <defs>\n",
" <path id=\"DejaVuSans-34\" d=\"M 2419 4116 \n",
"L 825 1625 \n",
"L 2419 1625 \n",
"L 2419 4116 \n",
"z\n",
"M 2253 4666 \n",
"L 3047 4666 \n",
"L 3047 1625 \n",
"L 3713 1625 \n",
"L 3713 1100 \n",
"L 3047 1100 \n",
"L 3047 0 \n",
"L 2419 0 \n",
"L 2419 1100 \n",
"L 313 1100 \n",
"L 313 1709 \n",
"L 2253 4666 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" </defs>\n",
" <use xlink:href=\"#DejaVuSans-34\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"xtick_3\">\n",
" <g id=\"line2d_5\">\n",
" <path d=\"M 138.603125 143.1 \n",
"L 138.603125 7.2 \n",
"\" clip-path=\"url(#p38f7277f50)\" style=\"fill: none; stroke: #b0b0b0; stroke-width: 0.8; stroke-linecap: square\"/>\n",
" </g>\n",
" <g id=\"line2d_6\">\n",
" <g>\n",
" <use xlink:href=\"#m69cc5df15a\" x=\"138.603125\" y=\"143.1\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_3\">\n",
" <!-- 6 -->\n",
" <g transform=\"translate(135.421875 157.698438)scale(0.1 -0.1)\">\n",
" <defs>\n",
" <path id=\"DejaVuSans-36\" d=\"M 2113 2584 \n",
"Q 1688 2584 1439 2293 \n",
"Q 1191 2003 1191 1497 \n",
"Q 1191 994 1439 701 \n",
"Q 1688 409 2113 409 \n",
"Q 2538 409 2786 701 \n",
"Q 3034 994 3034 1497 \n",
"Q 3034 2003 2786 2293 \n",
"Q 2538 2584 2113 2584 \n",
"z\n",
"M 3366 4563 \n",
"L 3366 3988 \n",
"Q 3128 4100 2886 4159 \n",
"Q 2644 4219 2406 4219 \n",
"Q 1781 4219 1451 3797 \n",
"Q 1122 3375 1075 2522 \n",
"Q 1259 2794 1537 2939 \n",
"Q 1816 3084 2150 3084 \n",
"Q 2853 3084 3261 2657 \n",
"Q 3669 2231 3669 1497 \n",
"Q 3669 778 3244 343 \n",
"Q 2819 -91 2113 -91 \n",
"Q 1303 -91 875 529 \n",
"Q 447 1150 447 2328 \n",
"Q 447 3434 972 4092 \n",
"Q 1497 4750 2381 4750 \n",
"Q 2619 4750 2861 4703 \n",
"Q 3103 4656 3366 4563 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" </defs>\n",
" <use xlink:href=\"#DejaVuSans-36\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"xtick_4\">\n",
" <g id=\"line2d_7\">\n",
" <path d=\"M 182.003125 143.1 \n",
"L 182.003125 7.2 \n",
"\" clip-path=\"url(#p38f7277f50)\" style=\"fill: none; stroke: #b0b0b0; stroke-width: 0.8; stroke-linecap: square\"/>\n",
" </g>\n",
" <g id=\"line2d_8\">\n",
" <g>\n",
" <use xlink:href=\"#m69cc5df15a\" x=\"182.003125\" y=\"143.1\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_4\">\n",
" <!-- 8 -->\n",
" <g transform=\"translate(178.821875 157.698438)scale(0.1 -0.1)\">\n",
" <defs>\n",
" <path id=\"DejaVuSans-38\" d=\"M 2034 2216 \n",
"Q 1584 2216 1326 1975 \n",
"Q 1069 1734 1069 1313 \n",
"Q 1069 891 1326 650 \n",
"Q 1584 409 2034 409 \n",
"Q 2484 409 2743 651 \n",
"Q 3003 894 3003 1313 \n",
"Q 3003 1734 2745 1975 \n",
"Q 2488 2216 2034 2216 \n",
"z\n",
"M 1403 2484 \n",
"Q 997 2584 770 2862 \n",
"Q 544 3141 544 3541 \n",
"Q 544 4100 942 4425 \n",
"Q 1341 4750 2034 4750 \n",
"Q 2731 4750 3128 4425 \n",
"Q 3525 4100 3525 3541 \n",
"Q 3525 3141 3298 2862 \n",
"Q 3072 2584 2669 2484 \n",
"Q 3125 2378 3379 2068 \n",
"Q 3634 1759 3634 1313 \n",
"Q 3634 634 3220 271 \n",
"Q 2806 -91 2034 -91 \n",
"Q 1263 -91 848 271 \n",
"Q 434 634 434 1313 \n",
"Q 434 1759 690 2068 \n",
"Q 947 2378 1403 2484 \n",
"z\n",
"M 1172 3481 \n",
"Q 1172 3119 1398 2916 \n",
"Q 1625 2713 2034 2713 \n",
"Q 2441 2713 2670 2916 \n",
"Q 2900 3119 2900 3481 \n",
"Q 2900 3844 2670 4047 \n",
"Q 2441 4250 2034 4250 \n",
"Q 1625 4250 1398 4047 \n",
"Q 1172 3844 1172 3481 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" </defs>\n",
" <use xlink:href=\"#DejaVuSans-38\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"xtick_5\">\n",
" <g id=\"line2d_9\">\n",
" <path d=\"M 225.403125 143.1 \n",
"L 225.403125 7.2 \n",
"\" clip-path=\"url(#p38f7277f50)\" style=\"fill: none; stroke: #b0b0b0; stroke-width: 0.8; stroke-linecap: square\"/>\n",
" </g>\n",
" <g id=\"line2d_10\">\n",
" <g>\n",
" <use xlink:href=\"#m69cc5df15a\" x=\"225.403125\" y=\"143.1\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_5\">\n",
" <!-- 10 -->\n",
" <g transform=\"translate(219.040625 157.698438)scale(0.1 -0.1)\">\n",
" <defs>\n",
" <path id=\"DejaVuSans-31\" d=\"M 794 531 \n",
"L 1825 531 \n",
"L 1825 4091 \n",
"L 703 3866 \n",
"L 703 4441 \n",
"L 1819 4666 \n",
"L 2450 4666 \n",
"L 2450 531 \n",
"L 3481 531 \n",
"L 3481 0 \n",
"L 794 0 \n",
"L 794 531 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" <path id=\"DejaVuSans-30\" d=\"M 2034 4250 \n",
"Q 1547 4250 1301 3770 \n",
"Q 1056 3291 1056 2328 \n",
"Q 1056 1369 1301 889 \n",
"Q 1547 409 2034 409 \n",
"Q 2525 409 2770 889 \n",
"Q 3016 1369 3016 2328 \n",
"Q 3016 3291 2770 3770 \n",
"Q 2525 4250 2034 4250 \n",
"z\n",
"M 2034 4750 \n",
"Q 2819 4750 3233 4129 \n",
"Q 3647 3509 3647 2328 \n",
"Q 3647 1150 3233 529 \n",
"Q 2819 -91 2034 -91 \n",
"Q 1250 -91 836 529 \n",
"Q 422 1150 422 2328 \n",
"Q 422 3509 836 4129 \n",
"Q 1250 4750 2034 4750 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" </defs>\n",
" <use xlink:href=\"#DejaVuSans-31\"/>\n",
" <use xlink:href=\"#DejaVuSans-30\" x=\"63.623047\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_6\">\n",
" <!-- epoch -->\n",
" <g transform=\"translate(112.525 171.376563)scale(0.1 -0.1)\">\n",
" <defs>\n",
" <path id=\"DejaVuSans-65\" d=\"M 3597 1894 \n",
"L 3597 1613 \n",
"L 953 1613 \n",
"Q 991 1019 1311 708 \n",
"Q 1631 397 2203 397 \n",
"Q 2534 397 2845 478 \n",
"Q 3156 559 3463 722 \n",
"L 3463 178 \n",
"Q 3153 47 2828 -22 \n",
"Q 2503 -91 2169 -91 \n",
"Q 1331 -91 842 396 \n",
"Q 353 884 353 1716 \n",
"Q 353 2575 817 3079 \n",
"Q 1281 3584 2069 3584 \n",
"Q 2775 3584 3186 3129 \n",
"Q 3597 2675 3597 1894 \n",
"z\n",
"M 3022 2063 \n",
"Q 3016 2534 2758 2815 \n",
"Q 2500 3097 2075 3097 \n",
"Q 1594 3097 1305 2825 \n",
"Q 1016 2553 972 2059 \n",
"L 3022 2063 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" <path id=\"DejaVuSans-70\" d=\"M 1159 525 \n",
"L 1159 -1331 \n",
"L 581 -1331 \n",
"L 581 3500 \n",
"L 1159 3500 \n",
"L 1159 2969 \n",
"Q 1341 3281 1617 3432 \n",
"Q 1894 3584 2278 3584 \n",
"Q 2916 3584 3314 3078 \n",
"Q 3713 2572 3713 1747 \n",
"Q 3713 922 3314 415 \n",
"Q 2916 -91 2278 -91 \n",
"Q 1894 -91 1617 61 \n",
"Q 1341 213 1159 525 \n",
"z\n",
"M 3116 1747 \n",
"Q 3116 2381 2855 2742 \n",
"Q 2594 3103 2138 3103 \n",
"Q 1681 3103 1420 2742 \n",
"Q 1159 2381 1159 1747 \n",
"Q 1159 1113 1420 752 \n",
"Q 1681 391 2138 391 \n",
"Q 2594 391 2855 752 \n",
"Q 3116 1113 3116 1747 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" <path id=\"DejaVuSans-6f\" d=\"M 1959 3097 \n",
"Q 1497 3097 1228 2736 \n",
"Q 959 2375 959 1747 \n",
"Q 959 1119 1226 758 \n",
"Q 1494 397 1959 397 \n",
"Q 2419 397 2687 759 \n",
"Q 2956 1122 2956 1747 \n",
"Q 2956 2369 2687 2733 \n",
"Q 2419 3097 1959 3097 \n",
"z\n",
"M 1959 3584 \n",
"Q 2709 3584 3137 3096 \n",
"Q 3566 2609 3566 1747 \n",
"Q 3566 888 3137 398 \n",
"Q 2709 -91 1959 -91 \n",
"Q 1206 -91 779 398 \n",
"Q 353 888 353 1747 \n",
"Q 353 2609 779 3096 \n",
"Q 1206 3584 1959 3584 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" <path id=\"DejaVuSans-63\" d=\"M 3122 3366 \n",
"L 3122 2828 \n",
"Q 2878 2963 2633 3030 \n",
"Q 2388 3097 2138 3097 \n",
"Q 1578 3097 1268 2742 \n",
"Q 959 2388 959 1747 \n",
"Q 959 1106 1268 751 \n",
"Q 1578 397 2138 397 \n",
"Q 2388 397 2633 464 \n",
"Q 2878 531 3122 666 \n",
"L 3122 134 \n",
"Q 2881 22 2623 -34 \n",
"Q 2366 -91 2075 -91 \n",
"Q 1284 -91 818 406 \n",
"Q 353 903 353 1747 \n",
"Q 353 2603 823 3093 \n",
"Q 1294 3584 2113 3584 \n",
"Q 2378 3584 2631 3529 \n",
"Q 2884 3475 3122 3366 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" <path id=\"DejaVuSans-68\" d=\"M 3513 2113 \n",
"L 3513 0 \n",
"L 2938 0 \n",
"L 2938 2094 \n",
"Q 2938 2591 2744 2837 \n",
"Q 2550 3084 2163 3084 \n",
"Q 1697 3084 1428 2787 \n",
"Q 1159 2491 1159 1978 \n",
"L 1159 0 \n",
"L 581 0 \n",
"L 581 4863 \n",
"L 1159 4863 \n",
"L 1159 2956 \n",
"Q 1366 3272 1645 3428 \n",
"Q 1925 3584 2291 3584 \n",
"Q 2894 3584 3203 3211 \n",
"Q 3513 2838 3513 2113 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" </defs>\n",
" <use xlink:href=\"#DejaVuSans-65\"/>\n",
" <use xlink:href=\"#DejaVuSans-70\" x=\"61.523438\"/>\n",
" <use xlink:href=\"#DejaVuSans-6f\" x=\"125\"/>\n",
" <use xlink:href=\"#DejaVuSans-63\" x=\"186.181641\"/>\n",
" <use xlink:href=\"#DejaVuSans-68\" x=\"241.162109\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"matplotlib.axis_2\">\n",
" <g id=\"ytick_1\">\n",
" <g id=\"line2d_11\">\n",
" <path d=\"M 30.103125 120.45 \n",
"L 225.403125 120.45 \n",
"\" clip-path=\"url(#p38f7277f50)\" style=\"fill: none; stroke: #b0b0b0; stroke-width: 0.8; stroke-linecap: square\"/>\n",
" </g>\n",
" <g id=\"line2d_12\">\n",
" <defs>\n",
" <path id=\"m0ca26dcbeb\" d=\"M 0 0 \n",
"L -3.5 0 \n",
"\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </defs>\n",
" <g>\n",
" <use xlink:href=\"#m0ca26dcbeb\" x=\"30.103125\" y=\"120.45\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_7\">\n",
" <!-- 0.4 -->\n",
" <g transform=\"translate(7.2 124.249219)scale(0.1 -0.1)\">\n",
" <defs>\n",
" <path id=\"DejaVuSans-2e\" d=\"M 684 794 \n",
"L 1344 794 \n",
"L 1344 0 \n",
"L 684 0 \n",
"L 684 794 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" </defs>\n",
" <use xlink:href=\"#DejaVuSans-30\"/>\n",
" <use xlink:href=\"#DejaVuSans-2e\" x=\"63.623047\"/>\n",
" <use xlink:href=\"#DejaVuSans-34\" x=\"95.410156\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"ytick_2\">\n",
" <g id=\"line2d_13\">\n",
" <path d=\"M 30.103125 75.15 \n",
"L 225.403125 75.15 \n",
"\" clip-path=\"url(#p38f7277f50)\" style=\"fill: none; stroke: #b0b0b0; stroke-width: 0.8; stroke-linecap: square\"/>\n",
" </g>\n",
" <g id=\"line2d_14\">\n",
" <g>\n",
" <use xlink:href=\"#m0ca26dcbeb\" x=\"30.103125\" y=\"75.15\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_8\">\n",
" <!-- 0.6 -->\n",
" <g transform=\"translate(7.2 78.949219)scale(0.1 -0.1)\">\n",
" <use xlink:href=\"#DejaVuSans-30\"/>\n",
" <use xlink:href=\"#DejaVuSans-2e\" x=\"63.623047\"/>\n",
" <use xlink:href=\"#DejaVuSans-36\" x=\"95.410156\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"ytick_3\">\n",
" <g id=\"line2d_15\">\n",
" <path d=\"M 30.103125 29.85 \n",
"L 225.403125 29.85 \n",
"\" clip-path=\"url(#p38f7277f50)\" style=\"fill: none; stroke: #b0b0b0; stroke-width: 0.8; stroke-linecap: square\"/>\n",
" </g>\n",
" <g id=\"line2d_16\">\n",
" <g>\n",
" <use xlink:href=\"#m0ca26dcbeb\" x=\"30.103125\" y=\"29.85\" style=\"stroke: #000000; stroke-width: 0.8\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"text_9\">\n",
" <!-- 0.8 -->\n",
" <g transform=\"translate(7.2 33.649219)scale(0.1 -0.1)\">\n",
" <use xlink:href=\"#DejaVuSans-30\"/>\n",
" <use xlink:href=\"#DejaVuSans-2e\" x=\"63.623047\"/>\n",
" <use xlink:href=\"#DejaVuSans-38\" x=\"95.410156\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <g id=\"line2d_17\">\n",
" <path d=\"M 35.272884 -1 \n",
"L 51.803125 75.61021 \n",
"L 73.503125 93.672344 \n",
"L 95.203125 102.778348 \n",
"L 116.903125 107.632437 \n",
"L 138.603125 112.487156 \n",
"L 160.303125 116.4354 \n",
"L 182.003125 119.040329 \n",
"L 203.703125 121.424263 \n",
"L 225.403125 124.527028 \n",
"\" clip-path=\"url(#p38f7277f50)\" style=\"fill: none; stroke: #1f77b4; stroke-width: 1.5; stroke-linecap: square\"/>\n",
" </g>\n",
" <g id=\"line2d_18\">\n",
" <path d=\"M 30.103125 65.6219 \n",
"L 51.803125 32.179175 \n",
"L 73.503125 25.7881 \n",
"L 95.203125 22.432125 \n",
"L 116.903125 21.005175 \n",
"L 138.603125 18.959125 \n",
"L 160.303125 18.0418 \n",
"L 182.003125 17.124475 \n",
"L 203.703125 16.0939 \n",
"L 225.403125 15.08975 \n",
"\" clip-path=\"url(#p38f7277f50)\" style=\"fill: none; stroke-dasharray: 5.55,2.4; stroke-dashoffset: 0; stroke: #bf00bf; stroke-width: 1.5\"/>\n",
" </g>\n",
" <g id=\"line2d_19\">\n",
" <path d=\"M 30.103125 41.6733 \n",
"L 51.803125 32.77185 \n",
"L 73.503125 25.11615 \n",
"L 95.203125 23.84775 \n",
"L 116.903125 27.3585 \n",
"L 138.603125 22.5567 \n",
"L 160.303125 23.84775 \n",
"L 182.003125 19.49895 \n",
"L 203.703125 22.7832 \n",
"L 225.403125 21.1977 \n",
"\" clip-path=\"url(#p38f7277f50)\" style=\"fill: none; stroke-dasharray: 9.6,2.4,1.5,2.4; stroke-dashoffset: 0; stroke: #008000; stroke-width: 1.5\"/>\n",
" </g>\n",
" <g id=\"patch_3\">\n",
" <path d=\"M 30.103125 143.1 \n",
"L 30.103125 7.2 \n",
"\" style=\"fill: none; stroke: #000000; stroke-width: 0.8; stroke-linejoin: miter; stroke-linecap: square\"/>\n",
" </g>\n",
" <g id=\"patch_4\">\n",
" <path d=\"M 225.403125 143.1 \n",
"L 225.403125 7.2 \n",
"\" style=\"fill: none; stroke: #000000; stroke-width: 0.8; stroke-linejoin: miter; stroke-linecap: square\"/>\n",
" </g>\n",
" <g id=\"patch_5\">\n",
" <path d=\"M 30.103125 143.1 \n",
"L 225.403125 143.1 \n",
"\" style=\"fill: none; stroke: #000000; stroke-width: 0.8; stroke-linejoin: miter; stroke-linecap: square\"/>\n",
" </g>\n",
" <g id=\"patch_6\">\n",
" <path d=\"M 30.103125 7.2 \n",
"L 225.403125 7.2 \n",
"\" style=\"fill: none; stroke: #000000; stroke-width: 0.8; stroke-linejoin: miter; stroke-linecap: square\"/>\n",
" </g>\n",
" <g id=\"legend_1\">\n",
" <g id=\"patch_7\">\n",
" <path d=\"M 140.634375 98.667187 \n",
"L 218.403125 98.667187 \n",
"Q 220.403125 98.667187 220.403125 96.667187 \n",
"L 220.403125 53.632812 \n",
"Q 220.403125 51.632812 218.403125 51.632812 \n",
"L 140.634375 51.632812 \n",
"Q 138.634375 51.632812 138.634375 53.632812 \n",
"L 138.634375 96.667187 \n",
"Q 138.634375 98.667187 140.634375 98.667187 \n",
"z\n",
"\" style=\"fill: #ffffff; opacity: 0.8; stroke: #cccccc; stroke-linejoin: miter\"/>\n",
" </g>\n",
" <g id=\"line2d_20\">\n",
" <path d=\"M 142.634375 59.73125 \n",
"L 152.634375 59.73125 \n",
"L 162.634375 59.73125 \n",
"\" style=\"fill: none; stroke: #1f77b4; stroke-width: 1.5; stroke-linecap: square\"/>\n",
" </g>\n",
" <g id=\"text_10\">\n",
" <!-- train loss -->\n",
" <g transform=\"translate(170.634375 63.23125)scale(0.1 -0.1)\">\n",
" <defs>\n",
" <path id=\"DejaVuSans-74\" d=\"M 1172 4494 \n",
"L 1172 3500 \n",
"L 2356 3500 \n",
"L 2356 3053 \n",
"L 1172 3053 \n",
"L 1172 1153 \n",
"Q 1172 725 1289 603 \n",
"Q 1406 481 1766 481 \n",
"L 2356 481 \n",
"L 2356 0 \n",
"L 1766 0 \n",
"Q 1100 0 847 248 \n",
"Q 594 497 594 1153 \n",
"L 594 3053 \n",
"L 172 3053 \n",
"L 172 3500 \n",
"L 594 3500 \n",
"L 594 4494 \n",
"L 1172 4494 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" <path id=\"DejaVuSans-72\" d=\"M 2631 2963 \n",
"Q 2534 3019 2420 3045 \n",
"Q 2306 3072 2169 3072 \n",
"Q 1681 3072 1420 2755 \n",
"Q 1159 2438 1159 1844 \n",
"L 1159 0 \n",
"L 581 0 \n",
"L 581 3500 \n",
"L 1159 3500 \n",
"L 1159 2956 \n",
"Q 1341 3275 1631 3429 \n",
"Q 1922 3584 2338 3584 \n",
"Q 2397 3584 2469 3576 \n",
"Q 2541 3569 2628 3553 \n",
"L 2631 2963 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" <path id=\"DejaVuSans-61\" d=\"M 2194 1759 \n",
"Q 1497 1759 1228 1600 \n",
"Q 959 1441 959 1056 \n",
"Q 959 750 1161 570 \n",
"Q 1363 391 1709 391 \n",
"Q 2188 391 2477 730 \n",
"Q 2766 1069 2766 1631 \n",
"L 2766 1759 \n",
"L 2194 1759 \n",
"z\n",
"M 3341 1997 \n",
"L 3341 0 \n",
"L 2766 0 \n",
"L 2766 531 \n",
"Q 2569 213 2275 61 \n",
"Q 1981 -91 1556 -91 \n",
"Q 1019 -91 701 211 \n",
"Q 384 513 384 1019 \n",
"Q 384 1609 779 1909 \n",
"Q 1175 2209 1959 2209 \n",
"L 2766 2209 \n",
"L 2766 2266 \n",
"Q 2766 2663 2505 2880 \n",
"Q 2244 3097 1772 3097 \n",
"Q 1472 3097 1187 3025 \n",
"Q 903 2953 641 2809 \n",
"L 641 3341 \n",
"Q 956 3463 1253 3523 \n",
"Q 1550 3584 1831 3584 \n",
"Q 2591 3584 2966 3190 \n",
"Q 3341 2797 3341 1997 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" <path id=\"DejaVuSans-69\" d=\"M 603 3500 \n",
"L 1178 3500 \n",
"L 1178 0 \n",
"L 603 0 \n",
"L 603 3500 \n",
"z\n",
"M 603 4863 \n",
"L 1178 4863 \n",
"L 1178 4134 \n",
"L 603 4134 \n",
"L 603 4863 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" <path id=\"DejaVuSans-6e\" d=\"M 3513 2113 \n",
"L 3513 0 \n",
"L 2938 0 \n",
"L 2938 2094 \n",
"Q 2938 2591 2744 2837 \n",
"Q 2550 3084 2163 3084 \n",
"Q 1697 3084 1428 2787 \n",
"Q 1159 2491 1159 1978 \n",
"L 1159 0 \n",
"L 581 0 \n",
"L 581 3500 \n",
"L 1159 3500 \n",
"L 1159 2956 \n",
"Q 1366 3272 1645 3428 \n",
"Q 1925 3584 2291 3584 \n",
"Q 2894 3584 3203 3211 \n",
"Q 3513 2838 3513 2113 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" <path id=\"DejaVuSans-20\" transform=\"scale(0.015625)\"/>\n",
" <path id=\"DejaVuSans-6c\" d=\"M 603 4863 \n",
"L 1178 4863 \n",
"L 1178 0 \n",
"L 603 0 \n",
"L 603 4863 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" <path id=\"DejaVuSans-73\" d=\"M 2834 3397 \n",
"L 2834 2853 \n",
"Q 2591 2978 2328 3040 \n",
"Q 2066 3103 1784 3103 \n",
"Q 1356 3103 1142 2972 \n",
"Q 928 2841 928 2578 \n",
"Q 928 2378 1081 2264 \n",
"Q 1234 2150 1697 2047 \n",
"L 1894 2003 \n",
"Q 2506 1872 2764 1633 \n",
"Q 3022 1394 3022 966 \n",
"Q 3022 478 2636 193 \n",
"Q 2250 -91 1575 -91 \n",
"Q 1294 -91 989 -36 \n",
"Q 684 19 347 128 \n",
"L 347 722 \n",
"Q 666 556 975 473 \n",
"Q 1284 391 1588 391 \n",
"Q 1994 391 2212 530 \n",
"Q 2431 669 2431 922 \n",
"Q 2431 1156 2273 1281 \n",
"Q 2116 1406 1581 1522 \n",
"L 1381 1569 \n",
"Q 847 1681 609 1914 \n",
"Q 372 2147 372 2553 \n",
"Q 372 3047 722 3315 \n",
"Q 1072 3584 1716 3584 \n",
"Q 2034 3584 2315 3537 \n",
"Q 2597 3491 2834 3397 \n",
"z\n",
"\" transform=\"scale(0.015625)\"/>\n",
" </defs>\n",
" <use xlink:href=\"#DejaVuSans-74\"/>\n",
" <use xlink:href=\"#DejaVuSans-72\" x=\"39.208984\"/>\n",
" <use xlink:href=\"#DejaVuSans-61\" x=\"80.322266\"/>\n",
" <use xlink:href=\"#DejaVuSans-69\" x=\"141.601562\"/>\n",
" <use xlink:href=\"#DejaVuSans-6e\" x=\"169.384766\"/>\n",
" <use xlink:href=\"#DejaVuSans-20\" x=\"232.763672\"/>\n",
" <use xlink:href=\"#DejaVuSans-6c\" x=\"264.550781\"/>\n",
" <use xlink:href=\"#DejaVuSans-6f\" x=\"292.333984\"/>\n",
" <use xlink:href=\"#DejaVuSans-73\" x=\"353.515625\"/>\n",
" <use xlink:href=\"#DejaVuSans-73\" x=\"405.615234\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"line2d_21\">\n",
" <path d=\"M 142.634375 74.409375 \n",
"L 152.634375 74.409375 \n",
"L 162.634375 74.409375 \n",
"\" style=\"fill: none; stroke-dasharray: 5.55,2.4; stroke-dashoffset: 0; stroke: #bf00bf; stroke-width: 1.5\"/>\n",
" </g>\n",
" <g id=\"text_11\">\n",
" <!-- train acc -->\n",
" <g transform=\"translate(170.634375 77.909375)scale(0.1 -0.1)\">\n",
" <use xlink:href=\"#DejaVuSans-74\"/>\n",
" <use xlink:href=\"#DejaVuSans-72\" x=\"39.208984\"/>\n",
" <use xlink:href=\"#DejaVuSans-61\" x=\"80.322266\"/>\n",
" <use xlink:href=\"#DejaVuSans-69\" x=\"141.601562\"/>\n",
" <use xlink:href=\"#DejaVuSans-6e\" x=\"169.384766\"/>\n",
" <use xlink:href=\"#DejaVuSans-20\" x=\"232.763672\"/>\n",
" <use xlink:href=\"#DejaVuSans-61\" x=\"264.550781\"/>\n",
" <use xlink:href=\"#DejaVuSans-63\" x=\"325.830078\"/>\n",
" <use xlink:href=\"#DejaVuSans-63\" x=\"380.810547\"/>\n",
" </g>\n",
" </g>\n",
" <g id=\"line2d_22\">\n",
" <path d=\"M 142.634375 89.0875 \n",
"L 152.634375 89.0875 \n",
"L 162.634375 89.0875 \n",
"\" style=\"fill: none; stroke-dasharray: 9.6,2.4,1.5,2.4; stroke-dashoffset: 0; stroke: #008000; stroke-width: 1.5\"/>\n",
" </g>\n",
" <g id=\"text_12\">\n",
" <!-- test acc -->\n",
" <g transform=\"translate(170.634375 92.5875)scale(0.1 -0.1)\">\n",
" <use xlink:href=\"#DejaVuSans-74\"/>\n",
" <use xlink:href=\"#DejaVuSans-65\" x=\"39.208984\"/>\n",
" <use xlink:href=\"#DejaVuSans-73\" x=\"100.732422\"/>\n",
" <use xlink:href=\"#DejaVuSans-74\" x=\"152.832031\"/>\n",
" <use xlink:href=\"#DejaVuSans-20\" x=\"192.041016\"/>\n",
" <use xlink:href=\"#DejaVuSans-61\" x=\"223.828125\"/>\n",
" <use xlink:href=\"#DejaVuSans-63\" x=\"285.107422\"/>\n",
" <use xlink:href=\"#DejaVuSans-63\" x=\"340.087891\"/>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" </g>\n",
" <defs>\n",
" <clipPath id=\"p38f7277f50\">\n",
" <rect x=\"30.103125\" y=\"7.2\" width=\"195.3\" height=\"135.9\"/>\n",
" </clipPath>\n",
" </defs>\n",
"</svg>\n"
],
"text/plain": [
"<Figure size 252x180 with 1 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)\n",
"d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)"
]
},
{
"cell_type": "markdown",
"id": "9b636c57",
"metadata": {
"origin_pos": 16
},
"source": [
"## Summary\n",
"\n",
"* We can implement multilayer perceptrons much more concisely with high-level APIs.\n",
"* For the same classification problem, the implementation of a multilayer perceptron is identical to that of softmax regression, except that the former adds hidden layers with activation functions.\n",
"\n",
"## Exercises\n",
"\n",
"1. Try adding different numbers of hidden layers (you may also modify the learning rate). Which setting works best?\n",
"1. Try out different activation functions. Which one works best?\n",
"1. Try different schemes for initializing the weights. Which method works best?\n"
]
},
{
"cell_type": "markdown",
"id": "36201fb3",
"metadata": {
"origin_pos": 18,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/1802)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,88 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2731ad59",
"metadata": {
"origin_pos": 0
},
"source": [
"# Fine-Tuning BERT for Sequence-Level and Token-Level Applications\n",
":label:`sec_finetuning-bert`\n",
"\n",
"In the previous sections of this chapter, we designed different models for natural language processing applications, such as those based on RNNs, CNNs, attention, and MLPs. These models are helpful when there are space or time constraints; however, crafting a specific model for every natural language processing task is practically infeasible. In :numref:`sec_bert`, we introduced a pretrained model, BERT, that requires minimal architecture changes for a wide range of natural language processing tasks. On the one hand, at the time of its proposal, BERT improved the state of the art on various natural language processing tasks. On the other hand, as noted in :numref:`sec_bert-pretraining`, the two versions of the original BERT model come with 110 million and 340 million parameters, respectively. Thus, when there are sufficient computational resources, we may consider fine-tuning BERT for downstream natural language processing applications.\n",
"\n",
"In the following, we generalize a subset of natural language processing applications as sequence-level and token-level. On the sequence level, we introduce how to transform the BERT representation of the text input into the output label in single text classification and in text pair classification (or regression). On the token level, we will briefly introduce new applications such as text tagging and question answering, and shed light on how BERT can represent their inputs and get transformed into output labels. During fine-tuning, the \"minimal architecture changes\" required by BERT across different applications are the extra fully connected layers. During supervised learning of a downstream application, parameters of the extra layers are learned from scratch while all the parameters in the pretrained BERT model are fine-tuned.\n",
"\n",
"## Single Text Classification\n",
"\n",
"*Single text classification* takes a single text sequence as input and outputs its classification result.\n",
"Besides the sentiment analysis that we have studied in this chapter, the Corpus of Linguistic Acceptability (CoLA) is also a dataset for single text classification, judging whether a given sentence is grammatically acceptable or not :cite:`Warstadt.Singh.Bowman.2019`. For instance, \"I should study.\" is acceptable but \"I should studying.\" is not.\n",
"\n",
"![Fine-tuning BERT for single text classification applications, such as sentiment analysis and testing linguistic acceptability (this assumes that the single input text has six tokens)](../img/bert-one-seq.svg)\n",
":label:`fig_bert-one-seq`\n",
"\n",
":numref:`sec_bert` describes the input representation of BERT. The BERT input sequence unambiguously represents both single text and text pairs, where the special classification token \"&lt;cls&gt;\" is used for sequence classification and the special token \"&lt;sep&gt;\" marks the end of single text or separates a pair of texts. As shown in :numref:`fig_bert-one-seq`, in single text classification applications, the BERT representation of the special classification token \"&lt;cls&gt;\" encodes the information of the entire input text sequence. As the representation of the single input text, it is fed into a small MLP consisting of fully connected (dense) layers to output the distribution over all the discrete label values.\n",
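"\n",
"As a rough sketch (not the book's implementation), such a classification head can be written as a small MLP applied to the \"&lt;cls&gt;\" vector; the hidden size and number of classes below are made-up values:\n",
"\n",
"```python\n",
"import torch\n",
"from torch import nn\n",
"\n",
"num_hiddens, num_classes = 768, 2  # hypothetical sizes\n",
"head = nn.Sequential(nn.Linear(num_hiddens, num_hiddens), nn.Tanh(),\n",
"                     nn.Linear(num_hiddens, num_classes))\n",
"cls_rep = torch.randn(4, num_hiddens)  # stands in for 4 <cls> vectors from BERT\n",
"logits = head(cls_rep)  # shape: (4, num_classes)\n",
"```\n",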
"\n",
"## Text Pair Classification or Regression\n",
"\n",
"We have also examined natural language inference in this chapter. It belongs to *text pair classification*, a type of application that classifies a pair of texts.\n",
"\n",
"Taking a pair of texts as input but outputting a continuous value, *semantic textual similarity* is a popular *text pair regression* task.\n",
"This task measures the semantic similarity of sentences. For instance, in the Semantic Textual Similarity Benchmark dataset, the similarity score of a pair of sentences is an ordinal scale ranging from 0 (meaning no semantic overlap) to 5 (meaning semantic equivalence) :cite:`Cer.Diab.Agirre.ea.2017`. The goal is to predict these scores. Examples from this dataset include (sentence 1, sentence 2, similarity score):\n",
"\n",
"* \"A plane is taking off.\", \"An air plane is taking off.\", 5.000;\n",
"* \"A woman is eating something.\", \"A woman is eating meat.\", 3.000;\n",
"* \"A woman is dancing.\", \"A man is talking.\", 0.000.\n",
"\n",
"![Fine-tuning BERT for text pair classification or regression applications, such as natural language inference and semantic textual similarity (this assumes that the input text pair has two and three tokens)](../img/bert-two-seqs.svg)\n",
":label:`fig_bert-two-seqs`\n",
"\n",
"Compared with single text classification in :numref:`fig_bert-one-seq`, fine-tuning BERT for text pair classification in :numref:`fig_bert-two-seqs` differs in the input representation. For text pair regression tasks such as semantic textual similarity, trivial changes can be applied, such as outputting a continuous label value and using the mean squared loss: they are common for regression.\n",
"\n",
"## Text Tagging\n",
"\n",
"Now let's consider token-level tasks, such as *text tagging*, where each token is assigned a label. Among text tagging tasks, *part-of-speech tagging* assigns each word a part-of-speech tag (e.g., adjective or determiner)\n",
"according to the role of the word in the sentence. For example, according to the Penn Treebank II tag set, the sentence \"John Smith's car is new\" should be tagged as \"NNP (noun, proper singular) NNP POS (possessive ending) NN (noun, singular or mass) VB (verb, base form) JJ (adjective)\".\n",
"\n",
"![Fine-tuning BERT for text tagging applications, such as part-of-speech tagging (this assumes that the single input text has six tokens)](../img/bert-tagging.svg)\n",
":label:`fig_bert-tagging`\n",
"\n",
"Fine-tuning BERT for text tagging is illustrated in :numref:`fig_bert-tagging`. Compared with :numref:`fig_bert-one-seq`, the only distinction is that in text tagging, the BERT representation of *every token* of the input text is fed into the same extra fully connected layer to output the label of the token, such as a part-of-speech tag.\n",
"\n",
"## Question Answering\n",
"\n",
"As another token-level application, *question answering* reflects capabilities of reading comprehension.\n",
"For example, the Stanford Question Answering Dataset (SQuAD v1.1) consists of reading passages and questions, where the answer to every question is just a segment of text (a text span) from the passage :cite:`Rajpurkar.Zhang.Lopyrev.ea.2016`. To explain, consider the passage \"Some experts report that a mask's efficacy is inconclusive. However, mask makers insist that their products, such as N95 respirator masks, can guard against the virus.\" and the question \"Who say that N95 respirator masks can guard against the virus?\". The answer should be the text span \"mask makers\" in the passage. Thus, the goal in SQuAD v1.1 is to predict the start and end of the text span in the passage given a pair of question and passage.\n",
"\n",
"![Fine-tuning BERT for question answering (this assumes that the input text pair has two and three tokens)](../img/bert-qa.svg)\n",
":label:`fig_bert-qa`\n",
"\n",
"To fine-tune BERT for question answering, the question and the passage are packed as the first and second text sequence, respectively, in the input of BERT. To predict the position of the start of the text span, the same additional fully connected layer transforms the BERT representation of any token from the passage at position $i$ into a scalar score $s_i$. The scores of all the passage tokens are further transformed by the softmax operation into a probability distribution, so that each token position $i$ in the passage is assigned a probability $p_i$ of being the start of the text span. Predicting the end of the text span is the same as above, except that parameters in its additional fully connected layer are independent of those for predicting the start. When predicting the end, any passage token at position $i$ is transformed by the same fully connected layer into a scalar score $e_i$. :numref:`fig_bert-qa` depicts fine-tuning BERT for question answering.\n",
"\n",
"For question answering, the training objective of the supervised learning is as straightforward as maximizing the log-likelihoods of the ground-truth start and end positions. When predicting the span, we can compute the score $s_i + e_j$ for a valid span from position $i$ to position $j$ ($i \\leq j$), and output the span with the highest score.\n",
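"\n",
"The span-selection rule above can be sketched as follows; this is an illustrative snippet (not the book's code), assuming the scalar scores $s_i$ and $e_j$ have already been computed:\n",
"\n",
"```python\n",
"import torch\n",
"\n",
"def best_span(s, e):\n",
"    # scores[i, j] = s_i + e_j; only spans with i <= j are valid\n",
"    scores = s.unsqueeze(1) + e.unsqueeze(0)\n",
"    mask = torch.triu(torch.ones_like(scores))\n",
"    scores = scores.masked_fill(mask == 0, float('-inf'))\n",
"    i, j = divmod(scores.argmax().item(), s.numel())\n",
"    return i, j\n",
"\n",
"s = torch.tensor([0.1, 2.0, 0.3, 0.0])  # made-up start scores\n",
"e = torch.tensor([0.2, 0.1, 1.5, 0.4])  # made-up end scores\n",
"start, end = best_span(s, e)  # selects the span from position 1 to 2\n",
"```\n",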
"\n",
"## Summary\n",
"\n",
"* BERT requires minimal architecture changes (extra fully connected layers) for sequence-level and token-level natural language processing applications, such as single text classification (e.g., sentiment analysis and testing linguistic acceptability), text pair classification or regression (e.g., natural language inference and semantic textual similarity), text tagging (e.g., part-of-speech tagging), and question answering.\n",
"* During supervised learning of a downstream application, parameters of the extra layers are learned from scratch while all the parameters in the pretrained BERT model are fine-tuned.\n",
"\n",
"## Exercises\n",
"\n",
"1. Let's design a search engine algorithm for news articles. When the system receives a query (e.g., \"oil industry during the coronavirus outbreak\"), it should return a ranked list of news articles that are most relevant to the query. Suppose that we have a huge pool of news articles and a large number of queries. To simplify the problem, suppose that the most relevant article has been labeled for each query. How can we apply negative sampling (see :numref:`subsec_negative-sampling`) and BERT in the algorithm design?\n",
"1. How can we leverage BERT to train language models?\n",
"1. Can we leverage BERT in machine translation?\n",
"\n",
"[Discussions](https://discuss.d2l.ai/t/5729)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,73 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "cd1572d4",
"metadata": {
"origin_pos": 0
},
"source": [
"# Natural Language Processing: Applications\n",
":label:`chap_nlp_app`\n",
"\n",
"We have seen how to represent tokens in text sequences\n",
"and train their representations in :numref:`chap_nlp_pretrain`.\n",
"Such pretrained text representations can be fed to various models for different downstream natural language processing tasks.\n",
"\n",
"The previous chapters touched upon some natural language processing applications without pretraining, just for explaining deep learning architectures.\n",
"For instance, in :numref:`chap_rnn`,\n",
"we relied on RNNs to design language models to generate novella-like text.\n",
"In :numref:`chap_modern_rnn` and :numref:`chap_attention`,\n",
"we also designed machine translation models based on RNNs and attention mechanisms.\n",
"\n",
"However, this book does not intend to cover all such applications in a comprehensive manner.\n",
"Instead, our focus is on *how to apply deep representation learning of languages to addressing natural language processing problems*.\n",
"Given the pretrained text representations,\n",
"this chapter will explore two popular and representative downstream natural language processing tasks:\n",
"sentiment analysis and natural language inference, which analyze single text and relationships of text pairs, respectively.\n",
"\n",
"![Pretrained text representations can be fed to various deep learning architectures for different downstream natural language processing applications (this chapter focuses on how to design models for different downstream applications)](../img/nlp-map-app.svg)\n",
":label:`fig_nlp-map-app`\n",
"\n",
"As depicted in :numref:`fig_nlp-map-app`,\n",
"this chapter focuses on describing and then designing natural language processing models\n",
"using different types of deep learning architectures,\n",
"such as MLPs, CNNs, RNNs, and attention.\n",
"Though it is possible to combine any pretrained text representations\n",
"with any architecture for any application in :numref:`fig_nlp-map-app`,\n",
"we select a few representative combinations.\n",
"Specifically, we will explore popular architectures based on RNNs and CNNs for sentiment analysis.\n",
"For natural language inference, we choose attention and MLPs to demonstrate how to analyze text pairs.\n",
"In the end, we introduce how to fine-tune a pretrained BERT model\n",
"for a wide range of natural language processing applications,\n",
"such as on a sequence level (single text classification and text pair classification)\n",
"and a token level (text tagging and question answering).\n",
"As a concrete empirical case, we will fine-tune BERT for natural language inference.\n",
"\n",
"As we have introduced in :numref:`sec_bert`,\n",
"BERT requires minimal architecture changes for a wide range of natural language processing applications.\n",
"However, this benefit comes at the cost of fine-tuning a huge number of BERT parameters for the downstream applications.\n",
"When space or time is limited, those crafted models based on MLPs, CNNs, RNNs,\n",
"and attention are more feasible.\n",
"In the following, we start with the sentiment analysis application\n",
"and illustrate the model design based on RNNs and CNNs, respectively.\n",
"\n",
":begin_tab:toc\n",
" - [sentiment-analysis-and-dataset](sentiment-analysis-and-dataset.ipynb)\n",
" - [sentiment-analysis-rnn](sentiment-analysis-rnn.ipynb)\n",
" - [sentiment-analysis-cnn](sentiment-analysis-cnn.ipynb)\n",
" - [natural-language-inference-and-dataset](natural-language-inference-and-dataset.ipynb)\n",
" - [natural-language-inference-attention](natural-language-inference-attention.ipynb)\n",
" - [finetuning-bert](finetuning-bert.ipynb)\n",
" - [natural-language-inference-bert](natural-language-inference-bert.ipynb)\n",
":end_tab:\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,479 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "15c5cd33",
"metadata": {
"origin_pos": 0
},
"source": [
"# Natural Language Inference and the Dataset\n",
":label:`sec_natural-language-inference-and-dataset`\n",
"\n",
"In :numref:`sec_sentiment`, we discussed the problem of sentiment analysis. This task aims to classify a single text sequence into predefined categories, such as a set of sentiment polarities. However, when there is a need to decide whether one sentence can be inferred from another, or to eliminate redundancy by identifying sentences that are semantically equivalent, knowing how to classify one text sequence is insufficient. Instead, we need to be able to reason over pairs of text sequences.\n",
"\n",
"## Natural Language Inference\n",
"\n",
"*Natural language inference* studies whether a *hypothesis*\n",
"can be inferred from a *premise*, where both are a text sequence.\n",
"In other words, natural language inference determines the logical relationship between a pair of text sequences. Such relationships usually fall into three types:\n",
"\n",
"* *Entailment*: the hypothesis can be inferred from the premise.\n",
"* *Contradiction*: the negation of the hypothesis can be inferred from the premise.\n",
"* *Neutral*: all the other cases.\n",
"\n",
"Natural language inference is also known as the recognizing textual entailment task.\n",
"For example, the following pair will be labeled as *entailment* because \"showing affection\" in the hypothesis can be inferred from \"hugging one another\" in the premise.\n",
"\n",
">Premise: Two women are hugging each other.\n",
"\n",
">Hypothesis: Two women are showing affection.\n",
"\n",
"The following is an example of *contradiction* as \"running the coding example\" indicates \"not sleeping\" rather than \"sleeping\".\n",
"\n",
">Premise: A man is running the coding example from Dive into Deep Learning.\n",
"\n",
">Hypothesis: The man is sleeping.\n",
"\n",
"The third example shows a *neutrality* relationship because neither \"famous\" nor \"not famous\" can be inferred from the fact that \"the musicians are performing for us\".\n",
"\n",
">Premise: The musicians are performing for us.\n",
"\n",
">Hypothesis: The musicians are famous.\n",
"\n",
"Natural language inference has been a central topic for understanding natural language. It enjoys wide applications ranging from information retrieval to open-domain question answering. To study this problem, we will begin by investigating a popular natural language inference benchmark dataset.\n",
"\n",
"## The Stanford Natural Language Inference (SNLI) Dataset\n",
"\n",
"[**The Stanford Natural Language Inference (SNLI) Corpus**] is a collection of over 500,000 labeled English sentence pairs :cite:`Bowman.Angeli.Potts.ea.2015`. We download and store the extracted SNLI dataset in the path `../data/snli_1.0`.\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "85ccbfd4",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:00.201212Z",
"iopub.status.busy": "2023-08-18T07:06:00.200144Z",
"iopub.status.idle": "2023-08-18T07:06:09.370822Z",
"shell.execute_reply": "2023-08-18T07:06:09.368591Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import os\n",
"import re\n",
"import torch\n",
"from torch import nn\n",
"from d2l import torch as d2l\n",
"\n",
"#@save\n",
"d2l.DATA_HUB['SNLI'] = (\n",
" 'https://nlp.stanford.edu/projects/snli/snli_1.0.zip',\n",
" '9fcde07509c7e87ec61c640c1b2753d9041758e4')\n",
"\n",
"data_dir = d2l.download_extract('SNLI')"
]
},
{
"cell_type": "markdown",
"id": "5e647396",
"metadata": {
"origin_pos": 4
},
"source": [
"### [**Reading the Dataset**]\n",
"\n",
"The original SNLI dataset contains much richer information than what we really need in our experiments. Thus, we define a function `read_snli` to extract only part of the dataset, then return lists of premises, hypotheses, and their labels.\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "fa839f80",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:09.377922Z",
"iopub.status.busy": "2023-08-18T07:06:09.377380Z",
"iopub.status.idle": "2023-08-18T07:06:09.392203Z",
"shell.execute_reply": "2023-08-18T07:06:09.390984Z"
},
"origin_pos": 5,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"def read_snli(data_dir, is_train):\n",
" \"\"\"Parse the SNLI dataset into premises, hypotheses, and labels\"\"\"\n",
" def extract_text(s):\n",
" # Remove information that will not be used by us\n",
" s = re.sub('\\\\(', '', s)\n",
" s = re.sub('\\\\)', '', s)\n",
" # Substitute two or more consecutive whitespace with a single space\n",
" s = re.sub('\\\\s{2,}', ' ', s)\n",
" return s.strip()\n",
" label_set = {'entailment': 0, 'contradiction': 1, 'neutral': 2}\n",
" file_name = os.path.join(data_dir, 'snli_1.0_train.txt'\n",
" if is_train else 'snli_1.0_test.txt')\n",
" with open(file_name, 'r') as f:\n",
" rows = [row.split('\\t') for row in f.readlines()[1:]]\n",
" premises = [extract_text(row[1]) for row in rows if row[0] in label_set]\n",
" hypotheses = [extract_text(row[2]) for row in rows if row[0] \\\n",
" in label_set]\n",
" labels = [label_set[row[0]] for row in rows if row[0] in label_set]\n",
" return premises, hypotheses, labels"
]
},
{
"cell_type": "markdown",
"id": "607a64fd",
"metadata": {
"origin_pos": 6
},
"source": [
"Now let's [**print the first 3 pairs**] of premise and hypothesis, as well as their labels (\"0\", \"1\", and \"2\" correspond to \"entailment\", \"contradiction\", and \"neutral\", respectively).\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "19101f9e",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:09.397297Z",
"iopub.status.busy": "2023-08-18T07:06:09.396407Z",
"iopub.status.idle": "2023-08-18T07:06:23.206512Z",
"shell.execute_reply": "2023-08-18T07:06:23.205574Z"
},
"origin_pos": 7,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Premise: A person on a horse jumps over a broken down airplane .\n",
"Hypothesis: A person is training his horse for a competition .\n",
"Label: 2\n",
"Premise: A person on a horse jumps over a broken down airplane .\n",
"Hypothesis: A person is at a diner , ordering an omelette .\n",
"Label: 1\n",
"Premise: A person on a horse jumps over a broken down airplane .\n",
"Hypothesis: A person is outdoors , on a horse .\n",
"Label: 0\n"
]
}
],
"source": [
"train_data = read_snli(data_dir, is_train=True)\n",
"for x0, x1, y in zip(train_data[0][:3], train_data[1][:3], train_data[2][:3]):\n",
" print('Premise:', x0)\n",
" print('Hypothesis:', x1)\n",
" print('Label:', y)"
]
},
{
"cell_type": "markdown",
"id": "f09b2cf4",
"metadata": {
"origin_pos": 8
},
"source": [
"There are about 550,000 pairs in the training set and about 10,000 pairs in the testing set. The following shows that the three [**labels \"entailment\", \"contradiction\", and \"neutral\" are balanced**] in both the training set and the testing set.\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "972ca3d1",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:23.210300Z",
"iopub.status.busy": "2023-08-18T07:06:23.209728Z",
"iopub.status.idle": "2023-08-18T07:06:23.531128Z",
"shell.execute_reply": "2023-08-18T07:06:23.530246Z"
},
"origin_pos": 9,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[183416, 183187, 182764]\n",
"[3368, 3237, 3219]\n"
]
}
],
"source": [
"test_data = read_snli(data_dir, is_train=False)\n",
"for data in [train_data, test_data]:\n",
" print([data[2].count(i) for i in range(3)])"
]
},
{
"cell_type": "markdown",
"id": "e7ab2708",
"metadata": {
"origin_pos": 10
},
"source": [
"### [**Defining a Class for Loading the Dataset**]\n",
"\n",
"Below we define a class for loading the SNLI dataset. The variable `num_steps` in the class constructor specifies the length of a text sequence so that each minibatch of sequences will have the same shape. In other words, tokens after the first `num_steps` ones in a longer sequence are trimmed, while special tokens \"&lt;pad&gt;\" will be appended to shorter sequences until their length becomes `num_steps`. By implementing the `__getitem__` function, we can arbitrarily access the premise, hypothesis, and label with the index `idx`.\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "b8b15f65",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:23.534933Z",
"iopub.status.busy": "2023-08-18T07:06:23.534365Z",
"iopub.status.idle": "2023-08-18T07:06:23.542550Z",
"shell.execute_reply": "2023-08-18T07:06:23.541714Z"
},
"origin_pos": 12,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"class SNLIDataset(torch.utils.data.Dataset):\n",
" \"\"\"A customized dataset to load the SNLI dataset\"\"\"\n",
" def __init__(self, dataset, num_steps, vocab=None):\n",
" self.num_steps = num_steps\n",
" all_premise_tokens = d2l.tokenize(dataset[0])\n",
" all_hypothesis_tokens = d2l.tokenize(dataset[1])\n",
" if vocab is None:\n",
" self.vocab = d2l.Vocab(all_premise_tokens + \\\n",
" all_hypothesis_tokens, min_freq=5, reserved_tokens=['<pad>'])\n",
" else:\n",
" self.vocab = vocab\n",
" self.premises = self._pad(all_premise_tokens)\n",
" self.hypotheses = self._pad(all_hypothesis_tokens)\n",
" self.labels = torch.tensor(dataset[2])\n",
" print('read ' + str(len(self.premises)) + ' examples')\n",
"\n",
" def _pad(self, lines):\n",
" return torch.tensor([d2l.truncate_pad(\n",
" self.vocab[line], self.num_steps, self.vocab['<pad>'])\n",
" for line in lines])\n",
"\n",
" def __getitem__(self, idx):\n",
" return (self.premises[idx], self.hypotheses[idx]), self.labels[idx]\n",
"\n",
" def __len__(self):\n",
" return len(self.premises)"
]
},
{
"cell_type": "markdown",
"id": "f5efd5df",
"metadata": {
"origin_pos": 14
},
"source": [
"### [**Putting It All Together**]\n",
"\n",
"Now we can invoke the `read_snli` function and the `SNLIDataset` class to download the SNLI dataset and return `DataLoader` instances for both the training and testing sets, together with the vocabulary of the training set. It is noteworthy that we must use the vocabulary constructed from the training set as that of the testing set. As a result, any new token from the testing set will be unknown to the model trained on the training set.\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "96c46f53",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:23.546033Z",
"iopub.status.busy": "2023-08-18T07:06:23.545509Z",
"iopub.status.idle": "2023-08-18T07:06:23.551107Z",
"shell.execute_reply": "2023-08-18T07:06:23.550286Z"
},
"origin_pos": 16,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"def load_data_snli(batch_size, num_steps=50):\n",
" \"\"\"Download the SNLI dataset and return data iterators and vocabulary\"\"\"\n",
" num_workers = d2l.get_dataloader_workers()\n",
" data_dir = d2l.download_extract('SNLI')\n",
" train_data = read_snli(data_dir, True)\n",
" test_data = read_snli(data_dir, False)\n",
" train_set = SNLIDataset(train_data, num_steps)\n",
" test_set = SNLIDataset(test_data, num_steps, train_set.vocab)\n",
" train_iter = torch.utils.data.DataLoader(train_set, batch_size,\n",
" shuffle=True,\n",
" num_workers=num_workers)\n",
" test_iter = torch.utils.data.DataLoader(test_set, batch_size,\n",
" shuffle=False,\n",
" num_workers=num_workers)\n",
" return train_iter, test_iter, train_set.vocab"
]
},
{
"cell_type": "markdown",
"id": "16d0cddb",
"metadata": {
"origin_pos": 18
},
"source": [
"Here we set the batch size to 128 and the sequence length to 50, and invoke the `load_data_snli` function to get the data iterators and vocabulary. Then we print the vocabulary size.\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "08d0c755",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:23.554839Z",
"iopub.status.busy": "2023-08-18T07:06:23.554288Z",
"iopub.status.idle": "2023-08-18T07:07:02.488484Z",
"shell.execute_reply": "2023-08-18T07:07:02.487658Z"
},
"origin_pos": 19,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"read 549367 examples\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"read 9824 examples\n"
]
},
{
"data": {
"text/plain": [
"18678"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"train_iter, test_iter, vocab = load_data_snli(128, 50)\n",
"len(vocab)"
]
},
{
"cell_type": "markdown",
"id": "783f8d2d",
"metadata": {
"origin_pos": 20
},
"source": [
"Now we print the shape of the first minibatch. Contrary to sentiment analysis, we have two inputs `X[0]` and `X[1]` representing premises and hypotheses, respectively.\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "d7411a33",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:02.492220Z",
"iopub.status.busy": "2023-08-18T07:07:02.491909Z",
"iopub.status.idle": "2023-08-18T07:07:02.966465Z",
"shell.execute_reply": "2023-08-18T07:07:02.965137Z"
},
"origin_pos": 21,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.Size([128, 50])\n",
"torch.Size([128, 50])\n",
"torch.Size([128])\n"
]
}
],
"source": [
"for X, Y in train_iter:\n",
" print(X[0].shape)\n",
" print(X[1].shape)\n",
" print(Y.shape)\n",
" break"
]
},
{
"cell_type": "markdown",
"id": "2cdcfd40",
"metadata": {
"origin_pos": 22
},
"source": [
"## Summary\n",
"\n",
"* Natural language inference studies whether a hypothesis can be inferred from a premise, where both are a text sequence.\n",
"* In natural language inference, relationships between premises and hypotheses include entailment, contradiction, and neutrality.\n",
"* The Stanford Natural Language Inference (SNLI) Corpus is a popular benchmark dataset of natural language inference.\n",
"\n",
"## Exercises\n",
"\n",
"1. Machine translation has long been evaluated based on superficial $n$-gram matching between an output translation and a ground-truth translation. Can you design a measure for evaluating machine translation results by using natural language inference?\n",
"1. How can we change hyperparameters to reduce the vocabulary size?\n"
]
},
{
"cell_type": "markdown",
"id": "d452fb1d",
"metadata": {
"origin_pos": 24,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/5722)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,108 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4eb98fe2",
"metadata": {
"origin_pos": 0
},
"source": [
"# Approximate Training\n",
":label:`sec_approx_train`\n",
"\n",
"Recall our discussions in :numref:`sec_word2vec`. The main idea of the skip-gram model is using softmax operations to calculate the conditional probability of generating a context word $w_o$ based on the given center word $w_c$ in :eqref:`eq_skip-gram-softmax`, whose corresponding logarithmic loss is given in :eqref:`eq_skip-gram-log`.\n",
"\n",
"Due to the nature of the softmax operation, since a context word may be anyone in the vocabulary $\\mathcal{V}$, :eqref:`eq_skip-gram-log` contains the summation of as many items as the entire size of the vocabulary. Consequently, the gradient calculation for the skip-gram model in :eqref:`eq_skip-gram-grad` and that for the continuous bag-of-words model in :eqref:`eq_cbow-gradient` both contain such a summation. Unfortunately, the computational cost for such gradients that sum over a large dictionary (often with hundreds of thousands or millions of words) is huge!\n",
"\n",
"In order to reduce the aforementioned computational complexity, this section will introduce two approximate training methods: *negative sampling* and *hierarchical softmax*.\n",
"Due to the similarity between the skip-gram model and the continuous bag-of-words model, we will just take the skip-gram model as an example to describe these two approximate training methods.\n",
"\n",
"## Negative Sampling\n",
":label:`subsec_negative-sampling`\n",
"\n",
"Negative sampling modifies the original objective function. Given the context window of a center word $w_c$, the fact that any (context) word $w_o$ comes from this context window is considered as an event with the probability modeled by\n",
"\n",
"$$P(D=1\\mid w_c, w_o) = \\sigma(\\mathbf{u}_o^\\top \\mathbf{v}_c),$$\n",
"\n",
"where $\\sigma$ uses the definition of the sigmoid activation function:\n",
"\n",
"$$\\sigma(x) = \\frac{1}{1+\\exp(-x)}.$$\n",
":eqlabel:`eq_sigma-f`\n",
"\n",
"Let's begin by maximizing the joint probability of all such events in text sequences to train word embeddings. Specifically, given a text sequence of length $T$, denote by $w^{(t)}$ the word at time step $t$ and let the context window size be $m$. Consider maximizing the joint probability\n",
"\n",
"$$ \\prod_{t=1}^{T} \\prod_{-m \\leq j \\leq m,\\ j \\neq 0} P(D=1\\mid w^{(t)}, w^{(t+j)}).$$\n",
":eqlabel:`eq-negative-sample-pos`\n",
"\n",
"However, :eqref:`eq-negative-sample-pos` only considers those events that involve positive examples. As a result, the joint probability in :eqref:`eq-negative-sample-pos` is maximized to 1 only if all the word vectors are equal to infinity. Of course, such results are meaningless. To make the objective function more meaningful, *negative sampling* adds negative examples sampled from a predefined distribution.\n",
"\n",
"Denote by $S$ the event that a context word $w_o$ comes from the context window of a center word $w_c$. For this event involving $w_o$, sample $K$ *noise words* that are not from this context window from a predefined distribution $P(w)$. Denote by $N_k$ the event that a noise word $w_k$ ($k=1, \\ldots, K$) does not come from the context window of $w_c$. Assume that these events involving both the positive example and negative examples $S, N_1, \\ldots, N_K$ are mutually independent. Negative sampling rewrites the joint probability (involving only positive examples) in :eqref:`eq-negative-sample-pos` as\n",
"\n",
"$$ \\prod_{t=1}^{T} \\prod_{-m \\leq j \\leq m,\\ j \\neq 0} P(w^{(t+j)} \\mid w^{(t)}),$$\n",
"\n",
"where the conditional probability is approximated through events $S, N_1, \\ldots, N_K$:\n",
"$$ P(w^{(t+j)} \\mid w^{(t)}) =P(D=1\\mid w^{(t)}, w^{(t+j)})\\prod_{k=1,\\ w_k \\sim P(w)}^K P(D=0\\mid w^{(t)}, w_k).$$\n",
":eqlabel:`eq-negative-sample-conditional-prob`\n",
"\n",
"Denote by $i_t$ and $h_k$ the indices of the word $w^{(t)}$ at time step $t$ of a text sequence and of a noise word $w_k$, respectively. The logarithmic loss with respect to the conditional probabilities in :eqref:`eq-negative-sample-conditional-prob` is\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"-\\log P(w^{(t+j)} \\mid w^{(t)})\n",
"=& -\\log P(D=1\\mid w^{(t)}, w^{(t+j)}) - \\sum_{k=1,\\ w_k \\sim P(w)}^K \\log P(D=0\\mid w^{(t)}, w_k)\\\\\n",
"=&- \\log\\, \\sigma\\left(\\mathbf{u}_{i_{t+j}}^\\top \\mathbf{v}_{i_t}\\right) - \\sum_{k=1,\\ w_k \\sim P(w)}^K \\log\\left(1-\\sigma\\left(\\mathbf{u}_{h_k}^\\top \\mathbf{v}_{i_t}\\right)\\right)\\\\\n",
"=&- \\log\\, \\sigma\\left(\\mathbf{u}_{i_{t+j}}^\\top \\mathbf{v}_{i_t}\\right) - \\sum_{k=1,\\ w_k \\sim P(w)}^K \\log\\sigma\\left(-\\mathbf{u}_{h_k}^\\top \\mathbf{v}_{i_t}\\right).\n",
"\\end{aligned}\n",
"$$\n",
"\n",
"We can see that now the computational cost of gradients at each training step has nothing to do with the vocabulary size, but linearly depends on $K$. When setting the hyperparameter $K$ to a smaller value, the computational cost of gradients at each training step with negative sampling is smaller.\n",
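"\n",
"The per-step loss above is easy to sketch directly; the following is a minimal illustration (not the book's implementation), assuming one positive context word and $K$ sampled noise words:\n",
"\n",
"```python\n",
"import torch\n",
"\n",
"def neg_sampling_loss(v_c, u_pos, u_negs):\n",
"    # -log sigmoid(u_o^T v_c) - sum_k log sigmoid(-u_k^T v_c)\n",
"    pos = torch.nn.functional.logsigmoid(u_pos @ v_c)\n",
"    neg = torch.nn.functional.logsigmoid(-(u_negs @ v_c)).sum()\n",
"    return -(pos + neg)\n",
"\n",
"v_c = torch.randn(100)        # center word vector\n",
"u_pos = torch.randn(100)      # context (positive) word vector\n",
"u_negs = torch.randn(5, 100)  # K=5 noise word vectors\n",
"loss = neg_sampling_loss(v_c, u_pos, u_negs)  # cost is O(K), not O(|V|)\n",
"```\n",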
"\n",
"## Hierarchical Softmax\n",
"\n",
"As an alternative approximate training method, *hierarchical softmax* uses the binary tree, a data structure illustrated in :numref:`fig_hi_softmax`, where each leaf node of the tree represents a word in the vocabulary $\\mathcal{V}$.\n",
"\n",
"![Hierarchical softmax for approximate training, where each leaf node of the tree represents a word in the vocabulary](../img/hi-softmax.svg)\n",
":label:`fig_hi_softmax`\n",
"\n",
"Denote by $L(w)$ the number of nodes (including both ends) on the path from the root node to the leaf node representing word $w$ in the binary tree. Let $n(w,j)$ be the $j^\\mathrm{th}$ node on this path, with its context word vector being $\\mathbf{u}_{n(w, j)}$. For example, $L(w_3) = 4$ in :numref:`fig_hi_softmax`. Hierarchical softmax approximates the conditional probability in :eqref:`eq_skip-gram-softmax` as\n",
"\n",
"$$P(w_o \\mid w_c) = \\prod_{j=1}^{L(w_o)-1} \\sigma\\left( [\\![ n(w_o, j+1) = \\text{leftChild}(n(w_o, j)) ]\\!] \\cdot \\mathbf{u}_{n(w_o, j)}^\\top \\mathbf{v}_c\\right),$$\n",
"\n",
"其中函数$\\sigma$在 :eqref:`eq_sigma-f`中定义,$\\text{leftChild}(n)$是节点$n$的左子节点:如果$x$为真,$[\\![x]\\!] = 1$;否则$[\\![x]\\!] = -1$。\n",
"\n",
"为了说明,让我们计算 :numref:`fig_hi_softmax`中给定词$w_c$生成词$w_3$的条件概率。这需要$w_c$的词向量$\\mathbf{v}_c$和从根到$w_3$的路径( :numref:`fig_hi_softmax`中加粗的路径)上的非叶节点向量之间的点积,该路径依次向左、向右和向左遍历:\n",
"\n",
"$$P(w_3 \\mid w_c) = \\sigma(\\mathbf{u}_{n(w_3, 1)}^\\top \\mathbf{v}_c) \\cdot \\sigma(-\\mathbf{u}_{n(w_3, 2)}^\\top \\mathbf{v}_c) \\cdot \\sigma(\\mathbf{u}_{n(w_3, 3)}^\\top \\mathbf{v}_c).$$\n",
"\n",
"由$\\sigma(x)+\\sigma(-x) = 1$,它认为基于任意词$w_c$生成词表$\\mathcal{V}$中所有词的条件概率总和为1\n",
"\n",
"$$\\sum_{w \\in \\mathcal{V}} P(w \\mid w_c) = 1.$$\n",
":eqlabel:`eq_hi-softmax-sum-one`\n",
"\n",
"幸运的是,由于二叉树结构,$L(w_o)-1$大约与$\\mathcal{O}(\\text{log}_2|\\mathcal{V}|)$是一个数量级。当词表大小$\\mathcal{V}$很大时,与没有近似训练的相比,使用分层softmax的每个训练步的计算代价显著降低。\n",
"\n",
"## 小结\n",
"\n",
"* 负采样通过考虑相互独立的事件来构造损失函数,这些事件同时涉及正例和负例。训练的计算量与每一步的噪声词数成线性关系。\n",
"* 分层softmax使用二叉树中从根节点到叶节点的路径构造损失函数。训练的计算成本取决于词表大小的对数。\n",
"\n",
"## 练习\n",
"\n",
"1. 如何在负采样中对噪声词进行采样?\n",
"1. 验证 :eqref:`eq_hi-softmax-sum-one`是否有效。\n",
"1. 如何分别使用负采样和分层softmax训练连续词袋模型?\n",
"\n",
"[Discussions](https://discuss.d2l.ai/t/5741)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
{
"cells": [
{
"cell_type": "markdown",
"id": "e6875f27",
"metadata": {
"origin_pos": 0
},
"source": [
"# 用于预训练BERT的数据集\n",
":label:`sec_bert-dataset`\n",
"\n",
"为了预训练 :numref:`sec_bert`中实现的BERT模型,我们需要以理想的格式生成数据集,以便于两个预训练任务:遮蔽语言模型和下一句预测。一方面,最初的BERT模型是在两个庞大的图书语料库和英语维基百科(参见 :numref:`subsec_bert_pretraining_tasks`)的合集上预训练的,但它很难吸引这本书的大多数读者。另一方面,现成的预训练BERT模型可能不适合医学等特定领域的应用。因此,在定制的数据集上对BERT进行预训练变得越来越流行。为了方便BERT预训练的演示,我们使用了较小的语料库WikiText-2 :cite:`Merity.Xiong.Bradbury.ea.2016`。\n",
"\n",
"与 :numref:`sec_word2vec_data`中用于预训练word2vec的PTB数据集相比,WikiText-2(1)保留了原来的标点符号,适合于下一句预测;(2)保留了原来的大小写和数字;(3)大了一倍以上。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "342b7589",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:38.284931Z",
"iopub.status.busy": "2023-08-18T07:00:38.284353Z",
"iopub.status.idle": "2023-08-18T07:00:41.113963Z",
"shell.execute_reply": "2023-08-18T07:00:41.112838Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import os\n",
"import random\n",
"import torch\n",
"from d2l import torch as d2l"
]
},
{
"cell_type": "markdown",
"id": "691a2248",
"metadata": {
"origin_pos": 4
},
"source": [
"在WikiText-2数据集中,每行代表一个段落,其中在任意标点符号及其前面的词元之间插入空格。保留至少有两句话的段落。为了简单起见,我们仅使用句号作为分隔符来拆分句子。我们将更复杂的句子拆分技术的讨论留在本节末尾的练习中。\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "eb911790",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:41.118878Z",
"iopub.status.busy": "2023-08-18T07:00:41.118515Z",
"iopub.status.idle": "2023-08-18T07:00:41.124582Z",
"shell.execute_reply": "2023-08-18T07:00:41.123696Z"
},
"origin_pos": 5,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"d2l.DATA_HUB['wikitext-2'] = (\n",
" 'https://s3.amazonaws.com/research.metamind.io/wikitext/'\n",
" 'wikitext-2-v1.zip', '3c914d17d80b1459be871a5039ac23e752a53cbe')\n",
"\n",
"#@save\n",
"def _read_wiki(data_dir):\n",
" file_name = os.path.join(data_dir, 'wiki.train.tokens')\n",
" with open(file_name, 'r') as f:\n",
" lines = f.readlines()\n",
" # 大写字母转换为小写字母\n",
" paragraphs = [line.strip().lower().split(' . ')\n",
" for line in lines if len(line.split(' . ')) >= 2]\n",
" random.shuffle(paragraphs)\n",
" return paragraphs"
]
},
{
"cell_type": "markdown",
"id": "f2f5515b",
"metadata": {
"origin_pos": 6
},
"source": [
"## 为预训练任务定义辅助函数\n",
"\n",
"在下文中,我们首先为BERT的两个预训练任务实现辅助函数。这些辅助函数将在稍后将原始文本语料库转换为理想格式的数据集时调用,以预训练BERT。\n",
"\n",
"### 生成下一句预测任务的数据\n",
"\n",
"根据 :numref:`subsec_nsp`的描述,`_get_next_sentence`函数生成二分类任务的训练样本。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "246ca273",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:41.128645Z",
"iopub.status.busy": "2023-08-18T07:00:41.128375Z",
"iopub.status.idle": "2023-08-18T07:00:41.133471Z",
"shell.execute_reply": "2023-08-18T07:00:41.132347Z"
},
"origin_pos": 7,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"def _get_next_sentence(sentence, next_sentence, paragraphs):\n",
" if random.random() < 0.5:\n",
" is_next = True\n",
" else:\n",
" # paragraphs是三重列表的嵌套\n",
" next_sentence = random.choice(random.choice(paragraphs))\n",
" is_next = False\n",
" return sentence, next_sentence, is_next"
]
},
{
"cell_type": "markdown",
"id": "13b1d432",
"metadata": {
"origin_pos": 8
},
"source": [
"下面的函数通过调用`_get_next_sentence`函数从输入`paragraph`生成用于下一句预测的训练样本。这里`paragraph`是句子列表,其中每个句子都是词元列表。自变量`max_len`指定预训练期间的BERT输入序列的最大长度。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a7686fde",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:41.137934Z",
"iopub.status.busy": "2023-08-18T07:00:41.137439Z",
"iopub.status.idle": "2023-08-18T07:00:41.143146Z",
"shell.execute_reply": "2023-08-18T07:00:41.142265Z"
},
"origin_pos": 9,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"def _get_nsp_data_from_paragraph(paragraph, paragraphs, vocab, max_len):\n",
" nsp_data_from_paragraph = []\n",
" for i in range(len(paragraph) - 1):\n",
" tokens_a, tokens_b, is_next = _get_next_sentence(\n",
" paragraph[i], paragraph[i + 1], paragraphs)\n",
" # 考虑1个'<cls>'词元和2个'<sep>'词元\n",
" if len(tokens_a) + len(tokens_b) + 3 > max_len:\n",
" continue\n",
" tokens, segments = d2l.get_tokens_and_segments(tokens_a, tokens_b)\n",
" nsp_data_from_paragraph.append((tokens, segments, is_next))\n",
" return nsp_data_from_paragraph"
]
},
{
"cell_type": "markdown",
"id": "86277b80",
"metadata": {
"origin_pos": 10
},
"source": [
"### 生成遮蔽语言模型任务的数据\n",
":label:`subsec_prepare_mlm_data`\n",
"\n",
"为了从BERT输入序列生成遮蔽语言模型的训练样本,我们定义了以下`_replace_mlm_tokens`函数。在其输入中,`tokens`是表示BERT输入序列的词元的列表,`candidate_pred_positions`是不包括特殊词元的BERT输入序列的词元索引的列表(特殊词元在遮蔽语言模型任务中不被预测),以及`num_mlm_preds`指示预测的数量(选择15%要预测的随机词元)。在 :numref:`subsec_mlm`中定义遮蔽语言模型任务之后,在每个预测位置,输入可以由特殊的“掩码”词元或随机词元替换,或者保持不变。最后,该函数返回可能替换后的输入词元、发生预测的词元索引和这些预测的标签。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "5e3de2c8",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:41.147428Z",
"iopub.status.busy": "2023-08-18T07:00:41.146946Z",
"iopub.status.idle": "2023-08-18T07:00:41.155481Z",
"shell.execute_reply": "2023-08-18T07:00:41.154569Z"
},
"origin_pos": 11,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"def _replace_mlm_tokens(tokens, candidate_pred_positions, num_mlm_preds,\n",
" vocab):\n",
" # 为遮蔽语言模型的输入创建新的词元副本,其中输入可能包含替换的“<mask>”或随机词元\n",
" mlm_input_tokens = [token for token in tokens]\n",
" pred_positions_and_labels = []\n",
" # 打乱后用于在遮蔽语言模型任务中获取15%的随机词元进行预测\n",
" random.shuffle(candidate_pred_positions)\n",
" for mlm_pred_position in candidate_pred_positions:\n",
" if len(pred_positions_and_labels) >= num_mlm_preds:\n",
" break\n",
" masked_token = None\n",
" # 80%的时间:将词替换为“<mask>”词元\n",
" if random.random() < 0.8:\n",
" masked_token = '<mask>'\n",
" else:\n",
" # 10%的时间:保持词不变\n",
" if random.random() < 0.5:\n",
" masked_token = tokens[mlm_pred_position]\n",
" # 10%的时间:用随机词替换该词\n",
" else:\n",
" masked_token = random.choice(vocab.idx_to_token)\n",
" mlm_input_tokens[mlm_pred_position] = masked_token\n",
" pred_positions_and_labels.append(\n",
" (mlm_pred_position, tokens[mlm_pred_position]))\n",
" return mlm_input_tokens, pred_positions_and_labels"
]
},
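{
"cell_type": "markdown",
"id": "4c1a9e2f",
"metadata": {},
"source": [
"作为旁证,下面用一个独立的小模拟检查“80%遮蔽、10%保持、10%随机替换”的比例(只复现上面函数的分支逻辑,不调用书中代码):\n",
"\n",
"```python\n",
"import random\n",
"\n",
"random.seed(0)\n",
"n = 100000\n",
"counts = {'mask': 0, 'keep': 0, 'random': 0}\n",
"for _ in range(n):\n",
"    if random.random() < 0.8:\n",
"        counts['mask'] += 1      # 替换为'<mask>'词元\n",
"    elif random.random() < 0.5:\n",
"        counts['keep'] += 1      # 保持词元不变\n",
"    else:\n",
"        counts['random'] += 1    # 替换为随机词元\n",
"\n",
"# 三个比例应分别接近0.8、0.1、0.1\n",
"ratios = {k: v / n for k, v in counts.items()}\n",
"```\n"
]
},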
{
"cell_type": "markdown",
"id": "81ce2383",
"metadata": {
"origin_pos": 12
},
"source": [
"通过调用前述的`_replace_mlm_tokens`函数,以下函数将BERT输入序列(`tokens`)作为输入,并返回输入词元的索引(在 :numref:`subsec_mlm`中描述的可能的词元替换之后)、发生预测的词元索引以及这些预测的标签索引。\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "841a4650",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:41.160061Z",
"iopub.status.busy": "2023-08-18T07:00:41.159300Z",
"iopub.status.idle": "2023-08-18T07:00:41.165820Z",
"shell.execute_reply": "2023-08-18T07:00:41.164855Z"
},
"origin_pos": 13,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"def _get_mlm_data_from_tokens(tokens, vocab):\n",
" candidate_pred_positions = []\n",
" # tokens是一个字符串列表\n",
" for i, token in enumerate(tokens):\n",
" # 在遮蔽语言模型任务中不会预测特殊词元\n",
" if token in ['<cls>', '<sep>']:\n",
" continue\n",
" candidate_pred_positions.append(i)\n",
" # 遮蔽语言模型任务中预测15%的随机词元\n",
" num_mlm_preds = max(1, round(len(tokens) * 0.15))\n",
" mlm_input_tokens, pred_positions_and_labels = _replace_mlm_tokens(\n",
" tokens, candidate_pred_positions, num_mlm_preds, vocab)\n",
" pred_positions_and_labels = sorted(pred_positions_and_labels,\n",
" key=lambda x: x[0])\n",
" pred_positions = [v[0] for v in pred_positions_and_labels]\n",
" mlm_pred_labels = [v[1] for v in pred_positions_and_labels]\n",
" return vocab[mlm_input_tokens], pred_positions, vocab[mlm_pred_labels]"
]
},
{
"cell_type": "markdown",
"id": "396550b1",
"metadata": {
"origin_pos": 14
},
"source": [
"## 将文本转换为预训练数据集\n",
"\n",
"现在我们几乎准备好为BERT预训练定制一个`Dataset`类。在此之前,我们仍然需要定义辅助函数`_pad_bert_inputs`来将特殊的“&lt;mask&gt;”词元附加到输入。它的参数`examples`包含来自两个预训练任务的辅助函数`_get_nsp_data_from_paragraph`和`_get_mlm_data_from_tokens`的输出。\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "6552099b",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:41.170203Z",
"iopub.status.busy": "2023-08-18T07:00:41.169578Z",
"iopub.status.idle": "2023-08-18T07:00:41.180126Z",
"shell.execute_reply": "2023-08-18T07:00:41.179219Z"
},
"origin_pos": 16,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"def _pad_bert_inputs(examples, max_len, vocab):\n",
" max_num_mlm_preds = round(max_len * 0.15)\n",
" all_token_ids, all_segments, valid_lens, = [], [], []\n",
" all_pred_positions, all_mlm_weights, all_mlm_labels = [], [], []\n",
" nsp_labels = []\n",
" for (token_ids, pred_positions, mlm_pred_label_ids, segments,\n",
" is_next) in examples:\n",
" all_token_ids.append(torch.tensor(token_ids + [vocab['<pad>']] * (\n",
" max_len - len(token_ids)), dtype=torch.long))\n",
" all_segments.append(torch.tensor(segments + [0] * (\n",
" max_len - len(segments)), dtype=torch.long))\n",
" # valid_lens不包括'<pad>'的计数\n",
" valid_lens.append(torch.tensor(len(token_ids), dtype=torch.float32))\n",
" all_pred_positions.append(torch.tensor(pred_positions + [0] * (\n",
" max_num_mlm_preds - len(pred_positions)), dtype=torch.long))\n",
" # 填充词元的预测将通过乘以0权重在损失中过滤掉\n",
" all_mlm_weights.append(\n",
" torch.tensor([1.0] * len(mlm_pred_label_ids) + [0.0] * (\n",
" max_num_mlm_preds - len(pred_positions)),\n",
" dtype=torch.float32))\n",
" all_mlm_labels.append(torch.tensor(mlm_pred_label_ids + [0] * (\n",
" max_num_mlm_preds - len(mlm_pred_label_ids)), dtype=torch.long))\n",
" nsp_labels.append(torch.tensor(is_next, dtype=torch.long))\n",
" return (all_token_ids, all_segments, valid_lens, all_pred_positions,\n",
" all_mlm_weights, all_mlm_labels, nsp_labels)"
]
},
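{
"cell_type": "markdown",
"id": "a9e07c31",
"metadata": {},
"source": [
"上面的填充逻辑可以用一个独立的最小示意来说明(`max_len`、`pad_id`和`token_ids`均为假设的示例值):\n",
"\n",
"```python\n",
"max_len = 8\n",
"pad_id = 0  # 假设'<pad>'在词表中的索引为0\n",
"token_ids = [3, 15, 7, 9]\n",
"\n",
"# 用pad_id补齐到max_len;valid_len不包括'<pad>'的计数\n",
"padded = token_ids + [pad_id] * (max_len - len(token_ids))\n",
"valid_len = len(token_ids)\n",
"# padded == [3, 15, 7, 9, 0, 0, 0, 0],valid_len == 4\n",
"```\n"
]
},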
{
"cell_type": "markdown",
"id": "d4e8a88c",
"metadata": {
"origin_pos": 18
},
"source": [
"将用于生成两个预训练任务的训练样本的辅助函数和用于填充输入的辅助函数放在一起,我们定义以下`_WikiTextDataset`类为用于预训练BERT的WikiText-2数据集。通过实现`__getitem__ `函数,我们可以任意访问WikiText-2语料库的一对句子生成的预训练样本(遮蔽语言模型和下一句预测)样本。\n",
"\n",
"最初的BERT模型使用词表大小为30000的WordPiece嵌入 :cite:`Wu.Schuster.Chen.ea.2016`。WordPiece的词元化方法是对 :numref:`subsec_Byte_Pair_Encoding`中原有的字节对编码算法稍作修改。为简单起见,我们使用`d2l.tokenize`函数进行词元化。出现次数少于5次的不频繁词元将被过滤掉。\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c4d049c9",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:41.184551Z",
"iopub.status.busy": "2023-08-18T07:00:41.183947Z",
"iopub.status.idle": "2023-08-18T07:00:41.192539Z",
"shell.execute_reply": "2023-08-18T07:00:41.191426Z"
},
"origin_pos": 20,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"class _WikiTextDataset(torch.utils.data.Dataset):\n",
" def __init__(self, paragraphs, max_len):\n",
" # 输入paragraphs[i]是代表段落的句子字符串列表;\n",
" # 而输出paragraphs[i]是代表段落的句子列表,其中每个句子都是词元列表\n",
" paragraphs = [d2l.tokenize(\n",
" paragraph, token='word') for paragraph in paragraphs]\n",
" sentences = [sentence for paragraph in paragraphs\n",
" for sentence in paragraph]\n",
" self.vocab = d2l.Vocab(sentences, min_freq=5, reserved_tokens=[\n",
" '<pad>', '<mask>', '<cls>', '<sep>'])\n",
" # 获取下一句子预测任务的数据\n",
" examples = []\n",
" for paragraph in paragraphs:\n",
" examples.extend(_get_nsp_data_from_paragraph(\n",
" paragraph, paragraphs, self.vocab, max_len))\n",
" # 获取遮蔽语言模型任务的数据\n",
" examples = [(_get_mlm_data_from_tokens(tokens, self.vocab)\n",
" + (segments, is_next))\n",
" for tokens, segments, is_next in examples]\n",
" # 填充输入\n",
" (self.all_token_ids, self.all_segments, self.valid_lens,\n",
" self.all_pred_positions, self.all_mlm_weights,\n",
" self.all_mlm_labels, self.nsp_labels) = _pad_bert_inputs(\n",
" examples, max_len, self.vocab)\n",
"\n",
" def __getitem__(self, idx):\n",
" return (self.all_token_ids[idx], self.all_segments[idx],\n",
" self.valid_lens[idx], self.all_pred_positions[idx],\n",
" self.all_mlm_weights[idx], self.all_mlm_labels[idx],\n",
" self.nsp_labels[idx])\n",
"\n",
" def __len__(self):\n",
" return len(self.all_token_ids)"
]
},
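{
"cell_type": "markdown",
"id": "b7d3c5a1",
"metadata": {},
"source": [
"上面构建词表时通过`min_freq=5`过滤掉低频词元。这种频次过滤可以用`collections.Counter`作一个独立的最小示意(示例句子是假设的,与`d2l.Vocab`的具体实现无关):\n",
"\n",
"```python\n",
"from collections import Counter\n",
"\n",
"sentences = [['the', 'cat', 'sat'], ['the', 'dog'], ['the', 'cat']]\n",
"counter = Counter(tok for s in sentences for tok in s)\n",
"min_freq = 2\n",
"kept = sorted(tok for tok, freq in counter.items() if freq >= min_freq)\n",
"# 'sat'和'dog'只出现1次,低于min_freq,因此被过滤\n",
"```\n"
]
},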
{
"cell_type": "markdown",
"id": "0ede31c0",
"metadata": {
"origin_pos": 22
},
"source": [
"通过使用`_read_wiki`函数和`_WikiTextDataset`类,我们定义了下面的`load_data_wiki`来下载并生成WikiText-2数据集,并从中生成预训练样本。\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "9b484a88",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:41.197261Z",
"iopub.status.busy": "2023-08-18T07:00:41.196591Z",
"iopub.status.idle": "2023-08-18T07:00:41.202074Z",
"shell.execute_reply": "2023-08-18T07:00:41.201154Z"
},
"origin_pos": 24,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"def load_data_wiki(batch_size, max_len):\n",
" \"\"\"加载WikiText-2数据集\"\"\"\n",
" num_workers = d2l.get_dataloader_workers()\n",
" data_dir = d2l.download_extract('wikitext-2', 'wikitext-2')\n",
" paragraphs = _read_wiki(data_dir)\n",
" train_set = _WikiTextDataset(paragraphs, max_len)\n",
" train_iter = torch.utils.data.DataLoader(train_set, batch_size,\n",
" shuffle=True, num_workers=num_workers)\n",
" return train_iter, train_set.vocab"
]
},
{
"cell_type": "markdown",
"id": "74b59eb9",
"metadata": {
"origin_pos": 26
},
"source": [
"将批量大小设置为512,将BERT输入序列的最大长度设置为64,我们打印出小批量的BERT预训练样本的形状。注意,在每个BERT输入序列中,为遮蔽语言模型任务预测$10$($64 \\times 0.15$)个位置。\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "f1a8e103",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:41.206083Z",
"iopub.status.busy": "2023-08-18T07:00:41.205815Z",
"iopub.status.idle": "2023-08-18T07:00:52.152614Z",
"shell.execute_reply": "2023-08-18T07:00:52.151321Z"
},
"origin_pos": 27,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloading ../data/wikitext-2-v1.zip from https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip...\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.Size([512, 64]) torch.Size([512, 64]) torch.Size([512]) torch.Size([512, 10]) torch.Size([512, 10]) torch.Size([512, 10]) torch.Size([512])\n"
]
}
],
"source": [
"batch_size, max_len = 512, 64\n",
"train_iter, vocab = load_data_wiki(batch_size, max_len)\n",
"\n",
"for (tokens_X, segments_X, valid_lens_x, pred_positions_X, mlm_weights_X,\n",
" mlm_Y, nsp_y) in train_iter:\n",
" print(tokens_X.shape, segments_X.shape, valid_lens_x.shape,\n",
" pred_positions_X.shape, mlm_weights_X.shape, mlm_Y.shape,\n",
" nsp_y.shape)\n",
" break"
]
},
{
"cell_type": "markdown",
"id": "c8b78dd7",
"metadata": {
"origin_pos": 28
},
"source": [
"最后,我们来看一下词量。即使在过滤掉不频繁的词元之后,它仍然比PTB数据集的大两倍以上。\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "47b86684",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:52.159404Z",
"iopub.status.busy": "2023-08-18T07:00:52.158958Z",
"iopub.status.idle": "2023-08-18T07:00:52.169643Z",
"shell.execute_reply": "2023-08-18T07:00:52.168438Z"
},
"origin_pos": 29,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"20256"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(vocab)"
]
},
{
"cell_type": "markdown",
"id": "081adbe2",
"metadata": {
"origin_pos": 30
},
"source": [
"## 小结\n",
"\n",
"* 与PTB数据集相比,WikiText-2数据集保留了原来的标点符号、大小写和数字,并且比PTB数据集大了两倍多。\n",
"* 我们可以任意访问从WikiText-2语料库中的一对句子生成的预训练(遮蔽语言模型和下一句预测)样本。\n",
"\n",
"## 练习\n",
"\n",
"1. 为简单起见,句号用作拆分句子的唯一分隔符。尝试其他的句子拆分技术,比如Spacy和NLTK。以NLTK为例,需要先安装NLTK`pip install nltk`。在代码中先`import nltk`。然后下载Punkt语句词元分析器:`nltk.download('punkt')`。要拆分句子,比如`sentences = 'This is great ! Why not ?'`,调用`nltk.tokenize.sent_tokenize(sentences)`将返回两个句子字符串的列表:`['This is great !', 'Why not ?']`。\n",
"1. 如果我们不过滤出一些不常见的词元,词量会有多大?\n"
]
},
{
"cell_type": "markdown",
"id": "cebcf3ae",
"metadata": {
"origin_pos": 32,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/5738)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
