{
"cells": [
{
"cell_type": "markdown",
"id": "ded8d82e",
"metadata": {
"origin_pos": 0
},
"source": [
"# 残差网络(ResNet)\n",
":label:`sec_resnet`\n",
"\n",
"随着我们设计越来越深的网络,深刻理解“新添加的层如何提升神经网络的性能”变得至关重要。更重要的是设计网络的能力,在这种网络中,添加层会使网络更具表现力,\n",
"为了取得质的突破,我们需要一些数学基础知识。\n",
"\n",
"## 函数类\n",
"\n",
"首先,假设有一类特定的神经网络架构$\\mathcal{F}$,它包括学习速率和其他超参数设置。\n",
"对于所有$f \\in \\mathcal{F}$,存在一些参数集(例如权重和偏置),这些参数可以通过在合适的数据集上进行训练而获得。\n",
"现在假设$f^*$是我们真正想要找到的函数,如果是$f^* \\in \\mathcal{F}$,那我们可以轻而易举的训练得到它,但通常我们不会那么幸运。\n",
"相反,我们将尝试找到一个函数$f^*_\\mathcal{F}$,这是我们在$\\mathcal{F}$中的最佳选择。\n",
"例如,给定一个具有$\\mathbf{X}$特性和$\\mathbf{y}$标签的数据集,我们可以尝试通过解决以下优化问题来找到它:\n",
"\n",
"$$f^*_\\mathcal{F} := \\mathop{\\mathrm{argmin}}_f L(\\mathbf{X}, \\mathbf{y}, f) \\text{ subject to } f \\in \\mathcal{F}.$$\n",
"\n",
"那么,怎样得到更近似真正$f^*$的函数呢?\n",
"唯一合理的可能性是,我们需要设计一个更强大的架构$\\mathcal{F}'$。\n",
"换句话说,我们预计$f^*_{\\mathcal{F}'}$比$f^*_{\\mathcal{F}}$“更近似”。\n",
"然而,如果$\\mathcal{F} \\not\\subseteq \\mathcal{F}'$,则无法保证新的体系“更近似”。\n",
"事实上,$f^*_{\\mathcal{F}'}$可能更糟:\n",
"如 :numref:`fig_functionclasses`所示,对于非嵌套函数(non-nested function)类,较复杂的函数类并不总是向“真”函数$f^*$靠拢(复杂度由$\\mathcal{F}_1$向$\\mathcal{F}_6$递增)。\n",
"在 :numref:`fig_functionclasses`的左边,虽然$\\mathcal{F}_3$比$\\mathcal{F}_1$更接近$f^*$,但$\\mathcal{F}_6$却离的更远了。\n",
"相反对于 :numref:`fig_functionclasses`右侧的嵌套函数(nested function)类$\\mathcal{F}_1 \\subseteq \\ldots \\subseteq \\mathcal{F}_6$,我们可以避免上述问题。\n",
"\n",
"\n",
":label:`fig_functionclasses`\n",
"\n",
"因此,只有当较复杂的函数类包含较小的函数类时,我们才能确保提高它们的性能。\n",
"对于深度神经网络,如果我们能将新添加的层训练成*恒等映射*(identity function)$f(\\mathbf{x}) = \\mathbf{x}$,新模型和原模型将同样有效。\n",
"同时,由于新模型可能得出更优的解来拟合训练数据集,因此添加层似乎更容易降低训练误差。\n",
"\n",
"针对这一问题,何恺明等人提出了*残差网络*(ResNet) :cite:`He.Zhang.Ren.ea.2016`。\n",
"它在2015年的ImageNet图像识别挑战赛夺魁,并深刻影响了后来的深度神经网络的设计。\n",
"残差网络的核心思想是:每个附加层都应该更容易地包含原始函数作为其元素之一。\n",
"于是,*残差块*(residual blocks)便诞生了,这个设计对如何建立深层神经网络产生了深远的影响。\n",
"凭借它,ResNet赢得了2015年ImageNet大规模视觉识别挑战赛。\n",
"\n",
"## (**残差块**)\n",
"\n",
"让我们聚焦于神经网络局部:如图 :numref:`fig_residual_block`所示,假设我们的原始输入为$x$,而希望学出的理想映射为$f(\\mathbf{x})$(作为 :numref:`fig_residual_block`上方激活函数的输入)。\n",
" :numref:`fig_residual_block`左图虚线框中的部分需要直接拟合出该映射$f(\\mathbf{x})$,而右图虚线框中的部分则需要拟合出残差映射$f(\\mathbf{x}) - \\mathbf{x}$。\n",
"残差映射在现实中往往更容易优化。\n",
"以本节开头提到的恒等映射作为我们希望学出的理想映射$f(\\mathbf{x})$,我们只需将 :numref:`fig_residual_block`中右图虚线框内上方的加权运算(如仿射)的权重和偏置参数设成0,那么$f(\\mathbf{x})$即为恒等映射。\n",
"实际中,当理想映射$f(\\mathbf{x})$极接近于恒等映射时,残差映射也易于捕捉恒等映射的细微波动。\n",
" :numref:`fig_residual_block`右图是ResNet的基础架构--*残差块*(residual block)。\n",
"在残差块中,输入可通过跨层数据线路更快地向前传播。\n",
"\n",
"\n",
":label:`fig_residual_block`\n",
"\n",
"ResNet沿用了VGG完整的$3\\times 3$卷积层设计。\n",
"残差块里首先有2个有相同输出通道数的$3\\times 3$卷积层。\n",
"每个卷积层后接一个批量规范化层和ReLU激活函数。\n",
"然后我们通过跨层数据通路,跳过这2个卷积运算,将输入直接加在最后的ReLU激活函数前。\n",
"这样的设计要求2个卷积层的输出与输入形状一样,从而使它们可以相加。\n",
"如果想改变通道数,就需要引入一个额外的$1\\times 1$卷积层来将输入变换成需要的形状后再做相加运算。\n",
"残差块的实现如下:\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "de076347",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:23:09.985121Z",
"iopub.status.busy": "2023-08-18T07:23:09.984259Z",
"iopub.status.idle": "2023-08-18T07:23:13.061925Z",
"shell.execute_reply": "2023-08-18T07:23:13.061035Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import torch\n",
"from torch import nn\n",
"from torch.nn import functional as F\n",
"from d2l import torch as d2l\n",
"\n",
"\n",
"class Residual(nn.Module): #@save\n",
" def __init__(self, input_channels, num_channels,\n",
" use_1x1conv=False, strides=1):\n",
" super().__init__()\n",
" self.conv1 = nn.Conv2d(input_channels, num_channels,\n",
" kernel_size=3, padding=1, stride=strides)\n",
" self.conv2 = nn.Conv2d(num_channels, num_channels,\n",
" kernel_size=3, padding=1)\n",
" if use_1x1conv:\n",
" self.conv3 = nn.Conv2d(input_channels, num_channels,\n",
" kernel_size=1, stride=strides)\n",
" else:\n",
" self.conv3 = None\n",
" self.bn1 = nn.BatchNorm2d(num_channels)\n",
" self.bn2 = nn.BatchNorm2d(num_channels)\n",
"\n",
" def forward(self, X):\n",
" Y = F.relu(self.bn1(self.conv1(X)))\n",
" Y = self.bn2(self.conv2(Y))\n",
" if self.conv3:\n",
" X = self.conv3(X)\n",
" Y += X\n",
" return F.relu(Y)"
]
},
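{
"cell_type": "markdown",
"id": "d4e53f6b",
"metadata": {},
"source": [
"As a quick check of the claim above (a sketch, not part of the original text, relying on PyTorch's default `BatchNorm2d` initialization of scale 1 and shift 0): if we push the weights and bias of the second convolution to zero, the residual branch outputs zero and the block reduces to the identity mapping followed by the final ReLU.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e5f64a7c",
"metadata": {
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"blk = Residual(3, 3)\n",
"# Zero out the last weight layer so the residual branch contributes nothing\n",
"nn.init.zeros_(blk.conv2.weight)\n",
"nn.init.zeros_(blk.conv2.bias)\n",
"\n",
"X = torch.rand(4, 3, 6, 6)\n",
"# The block now computes relu(x), i.e., the identity for nonnegative inputs\n",
"print(torch.allclose(blk(X), F.relu(X)))"
]
},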
{
"cell_type": "markdown",
"id": "800b1b46",
"metadata": {
"origin_pos": 5
},
"source": [
"如 :numref:`fig_resnet_block`所示,此代码生成两种类型的网络:\n",
"一种是当`use_1x1conv=False`时,应用ReLU非线性函数之前,将输入添加到输出。\n",
"另一种是当`use_1x1conv=True`时,添加通过$1 \\times 1$卷积调整通道和分辨率。\n",
"\n",
"\n",
":label:`fig_resnet_block`\n",
"\n",
"下面我们来查看[**输入和输出形状一致**]的情况。\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "af9ca1b9",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:23:13.066634Z",
"iopub.status.busy": "2023-08-18T07:23:13.065953Z",
"iopub.status.idle": "2023-08-18T07:23:13.103556Z",
"shell.execute_reply": "2023-08-18T07:23:13.102121Z"
},
"origin_pos": 7,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([4, 3, 6, 6])"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"blk = Residual(3,3)\n",
"X = torch.rand(4, 3, 6, 6)\n",
"Y = blk(X)\n",
"Y.shape"
]
},
{
"cell_type": "markdown",
"id": "419b9a4a",
"metadata": {
"origin_pos": 10
},
"source": [
"我们也可以在[**增加输出通道数的同时,减半输出的高和宽**]。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "e9a01bd0",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:23:13.108447Z",
"iopub.status.busy": "2023-08-18T07:23:13.107641Z",
"iopub.status.idle": "2023-08-18T07:23:13.127450Z",
"shell.execute_reply": "2023-08-18T07:23:13.126006Z"
},
"origin_pos": 12,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([4, 6, 3, 3])"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"blk = Residual(3,6, use_1x1conv=True, strides=2)\n",
"blk(X).shape"
]
},
{
"cell_type": "markdown",
"id": "a77a7e4f",
"metadata": {
"origin_pos": 15
},
"source": [
"## [**ResNet模型**]\n",
"\n",
"ResNet的前两层跟之前介绍的GoogLeNet中的一样:\n",
"在输出通道数为64、步幅为2的$7 \\times 7$卷积层后,接步幅为2的$3 \\times 3$的最大汇聚层。\n",
"不同之处在于ResNet每个卷积层后增加了批量规范化层。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e4fe2ed6",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:23:13.134088Z",
"iopub.status.busy": "2023-08-18T07:23:13.133092Z",
"iopub.status.idle": "2023-08-18T07:23:13.141355Z",
"shell.execute_reply": "2023-08-18T07:23:13.140086Z"
},
"origin_pos": 17,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"b1 = nn.Sequential(nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),\n",
" nn.BatchNorm2d(64), nn.ReLU(),\n",
" nn.MaxPool2d(kernel_size=3, stride=2, padding=1))"
]
},
{
"cell_type": "markdown",
"id": "4c0dcc5c",
"metadata": {
"origin_pos": 20
},
"source": [
"GoogLeNet在后面接了4个由Inception块组成的模块。\n",
"ResNet则使用4个由残差块组成的模块,每个模块使用若干个同样输出通道数的残差块。\n",
"第一个模块的通道数同输入通道数一致。\n",
"由于之前已经使用了步幅为2的最大汇聚层,所以无须减小高和宽。\n",
"之后的每个模块在第一个残差块里将上一个模块的通道数翻倍,并将高和宽减半。\n",
"\n",
"下面我们来实现这个模块。注意,我们对第一个模块做了特别处理。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "748cfd51",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:23:13.146374Z",
"iopub.status.busy": "2023-08-18T07:23:13.145731Z",
"iopub.status.idle": "2023-08-18T07:23:13.152040Z",
"shell.execute_reply": "2023-08-18T07:23:13.150742Z"
},
"origin_pos": 22,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"def resnet_block(input_channels, num_channels, num_residuals,\n",
" first_block=False):\n",
" blk = []\n",
" for i in range(num_residuals):\n",
" if i == 0 and not first_block:\n",
" blk.append(Residual(input_channels, num_channels,\n",
" use_1x1conv=True, strides=2))\n",
" else:\n",
" blk.append(Residual(num_channels, num_channels))\n",
" return blk"
]
},
{
"cell_type": "markdown",
"id": "3351bfea",
"metadata": {
"origin_pos": 25
},
"source": [
"接着在ResNet加入所有残差块,这里每个模块使用2个残差块。\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "cbb6978f",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:23:13.157627Z",
"iopub.status.busy": "2023-08-18T07:23:13.156822Z",
"iopub.status.idle": "2023-08-18T07:23:13.350496Z",
"shell.execute_reply": "2023-08-18T07:23:13.349272Z"
},
"origin_pos": 27,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"b2 = nn.Sequential(*resnet_block(64, 64, 2, first_block=True))\n",
"b3 = nn.Sequential(*resnet_block(64, 128, 2))\n",
"b4 = nn.Sequential(*resnet_block(128, 256, 2))\n",
"b5 = nn.Sequential(*resnet_block(256, 512, 2))"
]
},
{
"cell_type": "markdown",
"id": "badd44e1",
"metadata": {
"origin_pos": 29
},
"source": [
"最后,与GoogLeNet一样,在ResNet中加入全局平均汇聚层,以及全连接层输出。\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "2e587937",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:23:13.355546Z",
"iopub.status.busy": "2023-08-18T07:23:13.354729Z",
"iopub.status.idle": "2023-08-18T07:23:13.361543Z",
"shell.execute_reply": "2023-08-18T07:23:13.360406Z"
},
"origin_pos": 31,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"net = nn.Sequential(b1, b2, b3, b4, b5,\n",
" nn.AdaptiveAvgPool2d((1,1)),\n",
" nn.Flatten(), nn.Linear(512, 10))"
]
},
{
"cell_type": "markdown",
"id": "62731b56",
"metadata": {
"origin_pos": 34
},
"source": [
"每个模块有4个卷积层(不包括恒等映射的$1\\times 1$卷积层)。\n",
"加上第一个$7\\times 7$卷积层和最后一个全连接层,共有18层。\n",
"因此,这种模型通常被称为ResNet-18。\n",
"通过配置不同的通道数和模块里的残差块数可以得到不同的ResNet模型,例如更深的含152层的ResNet-152。\n",
"虽然ResNet的主体架构跟GoogLeNet类似,但ResNet架构更简单,修改也更方便。这些因素都导致了ResNet迅速被广泛使用。\n",
" :numref:`fig_resnet18`描述了完整的ResNet-18。\n",
"\n",
"\n",
":label:`fig_resnet18`\n",
"\n",
"在训练ResNet之前,让我们[**观察一下ResNet中不同模块的输入形状是如何变化的**]。\n",
"在之前所有架构中,分辨率降低,通道数量增加,直到全局平均汇聚层聚集所有特征。\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "3ea90646",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:23:13.365946Z",
"iopub.status.busy": "2023-08-18T07:23:13.365075Z",
"iopub.status.idle": "2023-08-18T07:23:13.416010Z",
"shell.execute_reply": "2023-08-18T07:23:13.414636Z"
},
"origin_pos": 36,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Sequential output shape:\t torch.Size([1, 64, 56, 56])\n",
"Sequential output shape:\t torch.Size([1, 64, 56, 56])\n",
"Sequential output shape:\t torch.Size([1, 128, 28, 28])\n",
"Sequential output shape:\t torch.Size([1, 256, 14, 14])\n",
"Sequential output shape:\t torch.Size([1, 512, 7, 7])\n",
"AdaptiveAvgPool2d output shape:\t torch.Size([1, 512, 1, 1])\n",
"Flatten output shape:\t torch.Size([1, 512])\n",
"Linear output shape:\t torch.Size([1, 10])\n"
]
}
],
"source": [
"X = torch.rand(size=(1, 1, 224, 224))\n",
"for layer in net:\n",
" X = layer(X)\n",
" print(layer.__class__.__name__,'output shape:\\t', X.shape)"
]
},
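{
"cell_type": "markdown",
"id": "f6a75b8d",
"metadata": {},
"source": [
"As noted above, deeper variants differ only in the numbers of channels and residual blocks per module. As a sketch (not part of the original text, reusing the `Residual` and `resnet_block` helpers defined above; the block counts (3, 4, 6, 3) follow Table 1 of the ResNet paper), a ResNet-34-style network could be assembled like this:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a7b86c9e",
"metadata": {
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"def resnet34_style(num_classes=10):\n",
"    # Same stem as b1, rebuilt here so parameters are not shared with net\n",
"    stem = nn.Sequential(nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),\n",
"                         nn.BatchNorm2d(64), nn.ReLU(),\n",
"                         nn.MaxPool2d(kernel_size=3, stride=2, padding=1))\n",
"    # 1 stem conv + 2 * (3 + 4 + 6 + 3) block convs + 1 linear = 34 layers\n",
"    return nn.Sequential(\n",
"        stem,\n",
"        nn.Sequential(*resnet_block(64, 64, 3, first_block=True)),\n",
"        nn.Sequential(*resnet_block(64, 128, 4)),\n",
"        nn.Sequential(*resnet_block(128, 256, 6)),\n",
"        nn.Sequential(*resnet_block(256, 512, 3)),\n",
"        nn.AdaptiveAvgPool2d((1, 1)),\n",
"        nn.Flatten(), nn.Linear(512, num_classes))\n",
"\n",
"print(resnet34_style()(torch.rand(1, 1, 224, 224)).shape)"
]
},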
{
"cell_type": "markdown",
"id": "40bb0cca",
"metadata": {
"origin_pos": 39
},
"source": [
"## [**训练模型**]\n",
"\n",
"同之前一样,我们在Fashion-MNIST数据集上训练ResNet。\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "e8e65fec",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:23:13.421685Z",
"iopub.status.busy": "2023-08-18T07:23:13.420709Z",
"iopub.status.idle": "2023-08-18T07:25:49.093828Z",
"shell.execute_reply": "2023-08-18T07:25:49.092826Z"
},
"origin_pos": 40,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"loss 0.012, train acc 0.997, test acc 0.893\n",
"5032.7 examples/sec on cuda:0\n"
]
}
],
"source": [
"lr, num_epochs, batch_size = 0.05, 10, 256\n",
"train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=96)\n",
"d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())"
]
},
{
"cell_type": "markdown",
"id": "17f638fb",
"metadata": {
"origin_pos": 41
},
"source": [
"## 小结\n",
"\n",
"* 学习嵌套函数(nested function)是训练神经网络的理想情况。在深层神经网络中,学习另一层作为恒等映射(identity function)较容易(尽管这是一个极端情况)。\n",
"* 残差映射可以更容易地学习同一函数,例如将权重层中的参数近似为零。\n",
"* 利用残差块(residual blocks)可以训练出一个有效的深层神经网络:输入可以通过层间的残余连接更快地向前传播。\n",
"* 残差网络(ResNet)对随后的深层神经网络设计产生了深远影响。\n",
"\n",
"## 练习\n",
"\n",
"1. :numref:`fig_inception`中的Inception块与残差块之间的主要区别是什么?在删除了Inception块中的一些路径之后,它们是如何相互关联的?\n",
"1. 参考ResNet论文 :cite:`He.Zhang.Ren.ea.2016`中的表1,以实现不同的变体。\n",
"1. 对于更深层次的网络,ResNet引入了“bottleneck”架构来降低模型复杂性。请试着去实现它。\n",
"1. 在ResNet的后续版本中,作者将“卷积层、批量规范化层和激活层”架构更改为“批量规范化层、激活层和卷积层”架构。请尝试做这个改进。详见 :cite:`He.Zhang.Ren.ea.2016*1`中的图1。\n",
"1. 为什么即使函数类是嵌套的,我们仍然要限制增加函数的复杂性呢?\n"
]
},
{
"cell_type": "markdown",
"id": "8af86e79",
"metadata": {
"origin_pos": 43,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/1877)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}