{ "cells": [ { "cell_type": "markdown", "id": "1dca9252", "metadata": { "origin_pos": 0 }, "source": [ "# 层和块\n", ":label:`sec_model_construction`\n", "\n", "之前首次介绍神经网络时,我们关注的是具有单一输出的线性模型。\n", "在这里,整个模型只有一个输出。\n", "注意,单个神经网络\n", "(1)接受一些输入;\n", "(2)生成相应的标量输出;\n", "(3)具有一组相关 *参数*(parameters),更新这些参数可以优化某目标函数。\n", "\n", "然后,当考虑具有多个输出的网络时,\n", "我们利用矢量化算法来描述整层神经元。\n", "像单个神经元一样,层(1)接受一组输入,\n", "(2)生成相应的输出,\n", "(3)由一组可调整参数描述。\n", "当我们使用softmax回归时,一个单层本身就是模型。\n", "然而,即使我们随后引入了多层感知机,我们仍然可以认为该模型保留了上面所说的基本架构。\n", "\n", "对于多层感知机而言,整个模型及其组成层都是这种架构。\n", "整个模型接受原始输入(特征),生成输出(预测),\n", "并包含一些参数(所有组成层的参数集合)。\n", "同样,每个单独的层接收输入(由前一层提供),\n", "生成输出(到下一层的输入),并且具有一组可调参数,\n", "这些参数根据从下一层反向传播的信号进行更新。\n", "\n", "事实证明,研究讨论“比单个层大”但“比整个模型小”的组件更有价值。\n", "例如,在计算机视觉中广泛流行的ResNet-152架构就有数百层,\n", "这些层是由*层组*(groups of layers)的重复模式组成。\n", "这个ResNet架构赢得了2015年ImageNet和COCO计算机视觉比赛\n", "的识别和检测任务 :cite:`He.Zhang.Ren.ea.2016`。\n", "目前ResNet架构仍然是许多视觉任务的首选架构。\n", "在其他的领域,如自然语言处理和语音,\n", "层组以各种重复模式排列的类似架构现在也是普遍存在。\n", "\n", "为了实现这些复杂的网络,我们引入了神经网络*块*的概念。\n", "*块*(block)可以描述单个层、由多个层组成的组件或整个模型本身。\n", "使用块进行抽象的一个好处是可以将一些块组合成更大的组件,\n", "这一过程通常是递归的,如 :numref:`fig_blocks`所示。\n", "通过定义代码来按需生成任意复杂度的块,\n", "我们可以通过简洁的代码实现复杂的神经网络。\n", "\n", "![多个层被组合成块,形成更大的模型](../img/blocks.svg)\n", ":label:`fig_blocks`\n", "\n", "从编程的角度来看,块由*类*(class)表示。\n", "它的任何子类都必须定义一个将其输入转换为输出的前向传播函数,\n", "并且必须存储任何必需的参数。\n", "注意,有些块不需要任何参数。\n", "最后,为了计算梯度,块必须具有反向传播函数。\n", "在定义我们自己的块时,由于自动微分(在 :numref:`sec_autograd` 中引入)\n", "提供了一些后端实现,我们只需要考虑前向传播函数和必需的参数。\n", "\n", "在构造自定义块之前,(**我们先回顾一下多层感知机**)\n", "( :numref:`sec_mlp_concise` )的代码。\n", "下面的代码生成一个网络,其中包含一个具有256个单元和ReLU激活函数的全连接隐藏层,\n", "然后是一个具有10个隐藏单元且不带激活函数的全连接输出层。\n" ] }, { "cell_type": "code", "execution_count": 1, "id": "9895e279", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T06:57:00.244437Z", "iopub.status.busy": "2023-08-18T06:57:00.243813Z", "iopub.status.idle": "2023-08-18T06:57:01.320999Z", "shell.execute_reply": "2023-08-18T06:57:01.320186Z" }, "origin_pos": 2, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor([[ 0.0343, 0.0264, 0.2505, -0.0243, 0.0945, 0.0012, -0.0141, 0.0666,\n", " -0.0547, -0.0667],\n", " [ 0.0772, -0.0274, 0.2638, -0.0191, 0.0394, -0.0324, 0.0102, 0.0707,\n", " -0.1481, -0.1031]], grad_fn=)" ] }, "execution_count": 1, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import torch\n", "from torch import nn\n", "from torch.nn import functional as F\n", "\n", "net = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))\n", "\n", "X = torch.rand(2, 20)\n", "net(X)" ] }, { "cell_type": "markdown", "id": "be949c0e", "metadata": { "origin_pos": 6, "tab": [ "pytorch" ] }, "source": [ "在这个例子中,我们通过实例化`nn.Sequential`来构建我们的模型,\n", "层的执行顺序是作为参数传递的。\n", "简而言之,(**`nn.Sequential`定义了一种特殊的`Module`**),\n", "即在PyTorch中表示一个块的类,\n", "它维护了一个由`Module`组成的有序列表。\n", "注意,两个全连接层都是`Linear`类的实例,\n", "`Linear`类本身就是`Module`的子类。\n", "另外,到目前为止,我们一直在通过`net(X)`调用我们的模型来获得模型的输出。\n", "这实际上是`net.__call__(X)`的简写。\n", "这个前向传播函数非常简单:\n", "它将列表中的每个块连接在一起,将每个块的输出作为下一个块的输入。\n" ] }, { "cell_type": "markdown", "id": "a3ce5ce8", "metadata": { "origin_pos": 9 }, "source": [ "## [**自定义块**]\n", "\n", "要想直观地了解块是如何工作的,最简单的方法就是自己实现一个。\n", "在实现我们自定义块之前,我们简要总结一下每个块必须提供的基本功能。\n" ] }, { "cell_type": "markdown", "id": "24ea84f7", "metadata": { "origin_pos": 11, "tab": [ "pytorch" ] }, "source": [ "1. 将输入数据作为其前向传播函数的参数。\n", "1. 通过前向传播函数来生成输出。请注意,输出的形状可能与输入的形状不同。例如,我们上面模型中的第一个全连接的层接收一个20维的输入,但是返回一个维度为256的输出。\n", "1. 计算其输出关于输入的梯度,可通过其反向传播函数进行访问。通常这是自动发生的。\n", "1. 存储和访问前向传播计算所需的参数。\n", "1. 根据需要初始化模型参数。\n" ] }, { "cell_type": "markdown", "id": "572894df", "metadata": { "origin_pos": 12 }, "source": [ "在下面的代码片段中,我们从零开始编写一个块。\n", "它包含一个多层感知机,其具有256个隐藏单元的隐藏层和一个10维输出层。\n", "注意,下面的`MLP`类继承了表示块的类。\n", "我们的实现只需要提供我们自己的构造函数(Python中的`__init__`函数)和前向传播函数。\n" ] }, { "cell_type": "code", "execution_count": 2, "id": "876df867", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T06:57:01.325541Z", "iopub.status.busy": "2023-08-18T06:57:01.324828Z", "iopub.status.idle": "2023-08-18T06:57:01.330411Z", "shell.execute_reply": "2023-08-18T06:57:01.329591Z" }, "origin_pos": 14, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "class MLP(nn.Module):\n", " # 用模型参数声明层。这里,我们声明两个全连接的层\n", " def __init__(self):\n", " # 调用MLP的父类Module的构造函数来执行必要的初始化。\n", " # 这样,在类实例化时也可以指定其他函数参数,例如模型参数params(稍后将介绍)\n", " super().__init__()\n", " self.hidden = nn.Linear(20, 256) # 隐藏层\n", " self.out = nn.Linear(256, 10) # 输出层\n", "\n", " # 定义模型的前向传播,即如何根据输入X返回所需的模型输出\n", " def forward(self, X):\n", " # 注意,这里我们使用ReLU的函数版本,其在nn.functional模块中定义。\n", " return self.out(F.relu(self.hidden(X)))" ] }, { "cell_type": "markdown", "id": "8327a09c", "metadata": { "origin_pos": 17 }, "source": [ "我们首先看一下前向传播函数,它以`X`作为输入,\n", "计算带有激活函数的隐藏表示,并输出其未规范化的输出值。\n", "在这个`MLP`实现中,两个层都是实例变量。\n", "要了解这为什么是合理的,可以想象实例化两个多层感知机(`net1`和`net2`),\n", "并根据不同的数据对它们进行训练。\n", "当然,我们希望它们学到两种不同的模型。\n", "\n", "接着我们[**实例化多层感知机的层,然后在每次调用前向传播函数时调用这些层**]。\n", "注意一些关键细节:\n", "首先,我们定制的`__init__`函数通过`super().__init__()`\n", "调用父类的`__init__`函数,\n", "省去了重复编写模版代码的痛苦。\n", "然后,我们实例化两个全连接层,\n", "分别为`self.hidden`和`self.out`。\n", "注意,除非我们实现一个新的运算符,\n", "否则我们不必担心反向传播函数或参数初始化,\n", "系统将自动生成这些。\n", "\n", "我们来试一下这个函数:\n" ] }, { "cell_type": "code", "execution_count": 3, "id": "f7a34ec3", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T06:57:01.334346Z", "iopub.status.busy": "2023-08-18T06:57:01.333603Z", "iopub.status.idle": "2023-08-18T06:57:01.340473Z", "shell.execute_reply": "2023-08-18T06:57:01.339676Z" }, "origin_pos": 19, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor([[ 0.0669, 0.2202, -0.0912, -0.0064, 0.1474, -0.0577, -0.3006, 0.1256,\n", " -0.0280, 0.4040],\n", " [ 0.0545, 0.2591, -0.0297, 0.1141, 0.1887, 0.0094, -0.2686, 0.0732,\n", " -0.0135, 0.3865]], grad_fn=)" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "net = MLP()\n", "net(X)" ] }, { "cell_type": "markdown", "id": "37aaa7fc", "metadata": { "origin_pos": 21 }, "source": [ "块的一个主要优点是它的多功能性。\n", "我们可以子类化块以创建层(如全连接层的类)、\n", "整个模型(如上面的`MLP`类)或具有中等复杂度的各种组件。\n", "我们在接下来的章节中充分利用了这种多功能性,\n", "比如在处理卷积神经网络时。\n", "\n", "## [**顺序块**]\n", "\n", "现在我们可以更仔细地看看`Sequential`类是如何工作的,\n", "回想一下`Sequential`的设计是为了把其他模块串起来。\n", "为了构建我们自己的简化的`MySequential`,\n", "我们只需要定义两个关键函数:\n", "\n", "1. 一种将块逐个追加到列表中的函数;\n", "1. 一种前向传播函数,用于将输入按追加块的顺序传递给块组成的“链条”。\n", "\n", "下面的`MySequential`类提供了与默认`Sequential`类相同的功能。\n" ] }, { "cell_type": "code", "execution_count": 4, "id": "dd09709c", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T06:57:01.344392Z", "iopub.status.busy": "2023-08-18T06:57:01.343695Z", "iopub.status.idle": "2023-08-18T06:57:01.349458Z", "shell.execute_reply": "2023-08-18T06:57:01.348481Z" }, "origin_pos": 23, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "class MySequential(nn.Module):\n", " def __init__(self, *args):\n", " super().__init__()\n", " for idx, module in enumerate(args):\n", " # 这里,module是Module子类的一个实例。我们把它保存在'Module'类的成员\n", " # 变量_modules中。_module的类型是OrderedDict\n", " self._modules[str(idx)] = module\n", "\n", " def forward(self, X):\n", " # OrderedDict保证了按照成员添加的顺序遍历它们\n", " for block in self._modules.values():\n", " X = block(X)\n", " return X" ] }, { "cell_type": "markdown", "id": "2a44d091", "metadata": { "origin_pos": 27, "tab": [ "pytorch" ] }, "source": [ "`__init__`函数将每个模块逐个添加到有序字典`_modules`中。\n", "读者可能会好奇为什么每个`Module`都有一个`_modules`属性?\n", "以及为什么我们使用它而不是自己定义一个Python列表?\n", "简而言之,`_modules`的主要优点是:\n", "在模块的参数初始化过程中,\n", "系统知道在`_modules`字典中查找需要初始化参数的子块。\n" ] }, { "cell_type": "markdown", "id": "0272bce5", "metadata": { "origin_pos": 29 }, "source": [ "当`MySequential`的前向传播函数被调用时,\n", "每个添加的块都按照它们被添加的顺序执行。\n", "现在可以使用我们的`MySequential`类重新实现多层感知机。\n" ] }, { "cell_type": "code", "execution_count": 5, "id": "9672de9a", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T06:57:01.353302Z", "iopub.status.busy": "2023-08-18T06:57:01.352727Z", "iopub.status.idle": "2023-08-18T06:57:01.360268Z", "shell.execute_reply": "2023-08-18T06:57:01.359462Z" }, "origin_pos": 31, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor([[ 2.2759e-01, -4.7003e-02, 4.2846e-01, -1.2546e-01, 1.5296e-01,\n", " 1.8972e-01, 9.7048e-02, 4.5479e-04, -3.7986e-02, 6.4842e-02],\n", " [ 2.7825e-01, -9.7517e-02, 4.8541e-01, -2.4519e-01, -8.4580e-02,\n", " 2.8538e-01, 3.6861e-02, 2.9411e-02, -1.0612e-01, 1.2620e-01]],\n", " grad_fn=)" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "net = MySequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))\n", "net(X)" ] }, { "cell_type": "markdown", "id": "189aa472", "metadata": { "origin_pos": 33 }, "source": [ "请注意,`MySequential`的用法与之前为`Sequential`类编写的代码相同\n", "(如 :numref:`sec_mlp_concise` 中所述)。\n", "\n", "## [**在前向传播函数中执行代码**]\n", "\n", "`Sequential`类使模型构造变得简单,\n", "允许我们组合新的架构,而不必定义自己的类。\n", "然而,并不是所有的架构都是简单的顺序架构。\n", "当需要更强的灵活性时,我们需要定义自己的块。\n", "例如,我们可能希望在前向传播函数中执行Python的控制流。\n", "此外,我们可能希望执行任意的数学运算,\n", "而不是简单地依赖预定义的神经网络层。\n", "\n", "到目前为止,\n", "我们网络中的所有操作都对网络的激活值及网络的参数起作用。\n", "然而,有时我们可能希望合并既不是上一层的结果也不是可更新参数的项,\n", "我们称之为*常数参数*(constant parameter)。\n", "例如,我们需要一个计算函数\n", "$f(\\mathbf{x},\\mathbf{w}) = c \\cdot \\mathbf{w}^\\top \\mathbf{x}$的层,\n", "其中$\\mathbf{x}$是输入,\n", "$\\mathbf{w}$是参数,\n", "$c$是某个在优化过程中没有更新的指定常量。\n", "因此我们实现了一个`FixedHiddenMLP`类,如下所示:\n" ] }, { "cell_type": "code", "execution_count": 6, "id": "9ad09596", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T06:57:01.364000Z", "iopub.status.busy": "2023-08-18T06:57:01.363468Z", "iopub.status.idle": "2023-08-18T06:57:01.369665Z", "shell.execute_reply": "2023-08-18T06:57:01.368755Z" }, "origin_pos": 35, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "class FixedHiddenMLP(nn.Module):\n", " def __init__(self):\n", " super().__init__()\n", " # 不计算梯度的随机权重参数。因此其在训练期间保持不变\n", " self.rand_weight = torch.rand((20, 20), requires_grad=False)\n", " self.linear = nn.Linear(20, 20)\n", "\n", " def forward(self, X):\n", " X = self.linear(X)\n", " # 使用创建的常量参数以及relu和mm函数\n", " X = F.relu(torch.mm(X, self.rand_weight) + 1)\n", " # 复用全连接层。这相当于两个全连接层共享参数\n", " X = self.linear(X)\n", " # 控制流\n", " while X.abs().sum() > 1:\n", " X /= 2\n", " return X.sum()" ] }, { "cell_type": "markdown", "id": "06017344", "metadata": { "origin_pos": 38 }, "source": [ "在这个`FixedHiddenMLP`模型中,我们实现了一个隐藏层,\n", "其权重(`self.rand_weight`)在实例化时被随机初始化,之后为常量。\n", "这个权重不是一个模型参数,因此它永远不会被反向传播更新。\n", "然后,神经网络将这个固定层的输出通过一个全连接层。\n", "\n", "注意,在返回输出之前,模型做了一些不寻常的事情:\n", "它运行了一个while循环,在$L_1$范数大于$1$的条件下,\n", "将输出向量除以$2$,直到它满足条件为止。\n", "最后,模型返回了`X`中所有项的和。\n", "注意,此操作可能不会常用于在任何实际任务中,\n", "我们只展示如何将任意代码集成到神经网络计算的流程中。\n" ] }, { "cell_type": "code", "execution_count": 7, "id": "00ebc567", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T06:57:01.373508Z", "iopub.status.busy": "2023-08-18T06:57:01.372789Z", "iopub.status.idle": "2023-08-18T06:57:01.380049Z", "shell.execute_reply": "2023-08-18T06:57:01.379025Z" }, "origin_pos": 40, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor(0.1862, grad_fn=)" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "net = FixedHiddenMLP()\n", "net(X)" ] }, { "cell_type": "markdown", "id": "80b18eb2", "metadata": { "origin_pos": 41 }, "source": [ "我们可以[**混合搭配各种组合块的方法**]。\n", "在下面的例子中,我们以一些想到的方法嵌套块。\n" ] }, { "cell_type": "code", "execution_count": 8, "id": "6ca3b399", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T06:57:01.384091Z", "iopub.status.busy": "2023-08-18T06:57:01.383236Z", "iopub.status.idle": "2023-08-18T06:57:01.394649Z", "shell.execute_reply": "2023-08-18T06:57:01.393535Z" }, "origin_pos": 43, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "tensor(0.2183, grad_fn=)" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "class NestMLP(nn.Module):\n", " def __init__(self):\n", " super().__init__()\n", " self.net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),\n", " nn.Linear(64, 32), nn.ReLU())\n", " self.linear = nn.Linear(32, 16)\n", "\n", " def forward(self, X):\n", " return self.linear(self.net(X))\n", "\n", "chimera = nn.Sequential(NestMLP(), nn.Linear(16, 20), FixedHiddenMLP())\n", "chimera(X)" ] }, { "cell_type": "markdown", "id": "3b12e280", "metadata": { "origin_pos": 46 }, "source": [ "## 效率\n" ] }, { "cell_type": "markdown", "id": "e26229d3", "metadata": { "origin_pos": 48, "tab": [ "pytorch" ] }, "source": [ "读者可能会开始担心操作效率的问题。\n", "毕竟,我们在一个高性能的深度学习库中进行了大量的字典查找、\n", "代码执行和许多其他的Python代码。\n", "Python的问题[全局解释器锁](https://wiki.python.org/moin/GlobalInterpreterLock)\n", "是众所周知的。\n", "在深度学习环境中,我们担心速度极快的GPU可能要等到CPU运行Python代码后才能运行另一个作业。\n" ] }, { "cell_type": "markdown", "id": "4fa617e6", "metadata": { "origin_pos": 51 }, "source": [ "## 小结\n", "\n", "* 一个块可以由许多层组成;一个块可以由许多块组成。\n", "* 块可以包含代码。\n", "* 块负责大量的内部处理,包括参数初始化和反向传播。\n", "* 层和块的顺序连接由`Sequential`块处理。\n", "\n", "## 练习\n", "\n", "1. 如果将`MySequential`中存储块的方式更改为Python列表,会出现什么样的问题?\n", "1. 实现一个块,它以两个块为参数,例如`net1`和`net2`,并返回前向传播中两个网络的串联输出。这也被称为平行块。\n", "1. 假设我们想要连接同一网络的多个实例。实现一个函数,该函数生成同一个块的多个实例,并在此基础上构建更大的网络。\n" ] }, { "cell_type": "markdown", "id": "c29846c8", "metadata": { "origin_pos": 53, "tab": [ "pytorch" ] }, "source": [ "[Discussions](https://discuss.d2l.ai/t/1827)\n" ] } ], "metadata": { "language_info": { "name": "python" }, "required_libs": [] }, "nbformat": 4, "nbformat_minor": 5 }