{ "cells": [ { "cell_type": "markdown", "id": "b9f10712", "metadata": { "origin_pos": 0 }, "source": [ "# 含并行连结的网络(GoogLeNet)\n", ":label:`sec_googlenet`\n", "\n", "在2014年的ImageNet图像识别挑战赛中,一个名叫*GoogLeNet* :cite:`Szegedy.Liu.Jia.ea.2015`的网络架构大放异彩。\n", "GoogLeNet吸收了NiN中串联网络的思想,并在此基础上做了改进。\n", "这篇论文的一个重点是解决了什么样大小的卷积核最合适的问题。\n", "毕竟,以前流行的网络使用小到$1 \\times 1$,大到$11 \\times 11$的卷积核。\n", "本文的一个观点是,有时使用不同大小的卷积核组合是有利的。\n", "本节将介绍一个稍微简化的GoogLeNet版本:我们省略了一些为稳定训练而添加的特殊特性,现在有了更好的训练方法,这些特性不是必要的。\n", "\n", "## (**Inception块**)\n", "\n", "在GoogLeNet中,基本的卷积块被称为*Inception块*(Inception block)。这很可能得名于电影《盗梦空间》(Inception),因为电影中的一句话“我们需要走得更深”(“We need to go deeper”)。\n", "\n", "![Inception块的架构。](../img/inception.svg)\n", ":label:`fig_inception`\n", "\n", "如 :numref:`fig_inception`所示,Inception块由四条并行路径组成。\n", "前三条路径使用窗口大小为$1\\times 1$、$3\\times 3$和$5\\times 5$的卷积层,从不同空间大小中提取信息。\n", "中间的两条路径在输入上执行$1\\times 1$卷积,以减少通道数,从而降低模型的复杂性。\n", "第四条路径使用$3\\times 3$最大汇聚层,然后使用$1\\times 1$卷积层来改变通道数。\n", "这四条路径都使用合适的填充来使输入与输出的高和宽一致,最后我们将每条线路的输出在通道维度上连结,并构成Inception块的输出。在Inception块中,通常调整的超参数是每层输出通道数。\n" ] }, { "cell_type": "code", "execution_count": 1, "id": "82e98083", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T07:18:45.698938Z", "iopub.status.busy": "2023-08-18T07:18:45.698048Z", "iopub.status.idle": "2023-08-18T07:18:49.844955Z", "shell.execute_reply": "2023-08-18T07:18:49.843595Z" }, "origin_pos": 2, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "import torch\n", "from torch import nn\n", "from torch.nn import functional as F\n", "from d2l import torch as d2l\n", "\n", "\n", "class Inception(nn.Module):\n", " # c1--c4是每条路径的输出通道数\n", " def __init__(self, in_channels, c1, c2, c3, c4, **kwargs):\n", " super(Inception, self).__init__(**kwargs)\n", " # 线路1,单1x1卷积层\n", " self.p1_1 = nn.Conv2d(in_channels, c1, kernel_size=1)\n", " # 线路2,1x1卷积层后接3x3卷积层\n", " self.p2_1 = nn.Conv2d(in_channels, c2[0], kernel_size=1)\n", " self.p2_2 = nn.Conv2d(c2[0], c2[1], kernel_size=3, padding=1)\n", " # 线路3,1x1卷积层后接5x5卷积层\n", " self.p3_1 = nn.Conv2d(in_channels, c3[0], kernel_size=1)\n", " self.p3_2 = nn.Conv2d(c3[0], c3[1], kernel_size=5, padding=2)\n", " # 线路4,3x3最大汇聚层后接1x1卷积层\n", " self.p4_1 = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)\n", " self.p4_2 = nn.Conv2d(in_channels, c4, kernel_size=1)\n", "\n", " def forward(self, x):\n", " p1 = F.relu(self.p1_1(x))\n", " p2 = F.relu(self.p2_2(F.relu(self.p2_1(x))))\n", " p3 = F.relu(self.p3_2(F.relu(self.p3_1(x))))\n", " p4 = F.relu(self.p4_2(self.p4_1(x)))\n", " # 在通道维度上连结输出\n", " return torch.cat((p1, p2, p3, p4), dim=1)" ] }, { "cell_type": "markdown", "id": "67ed1e2a", "metadata": { "origin_pos": 5 }, "source": [ "那么为什么GoogLeNet这个网络如此有效呢?\n", "首先我们考虑一下滤波器(filter)的组合,它们可以用各种滤波器尺寸探索图像,这意味着不同大小的滤波器可以有效地识别不同范围的图像细节。\n", "同时,我们可以为不同的滤波器分配不同数量的参数。\n", "\n", "## [**GoogLeNet模型**]\n", "\n", "如 :numref:`fig_inception_full`所示,GoogLeNet一共使用9个Inception块和全局平均汇聚层的堆叠来生成其估计值。Inception块之间的最大汇聚层可降低维度。\n", "第一个模块类似于AlexNet和LeNet,Inception块的组合从VGG继承,全局平均汇聚层避免了在最后使用全连接层。\n", "\n", "![GoogLeNet架构。](../img/inception-full.svg)\n", ":label:`fig_inception_full`\n", "\n", "现在,我们逐一实现GoogLeNet的每个模块。第一个模块使用64个通道、$7\\times 7$卷积层。\n" ] }, { "cell_type": "code", "execution_count": 2, "id": "4278ba0a", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T07:18:49.850742Z", "iopub.status.busy": "2023-08-18T07:18:49.849860Z", "iopub.status.idle": "2023-08-18T07:18:49.877802Z", "shell.execute_reply": "2023-08-18T07:18:49.876517Z" }, "origin_pos": 7, "tab": [ "pytorch" ] }, "outputs": [], 
"source": [ "b1 = nn.Sequential(nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),\n", " nn.ReLU(),\n", " nn.MaxPool2d(kernel_size=3, stride=2, padding=1))" ] }, { "cell_type": "markdown", "id": "b4d6d53a", "metadata": { "origin_pos": 10 }, "source": [ "第二个模块使用两个卷积层:第一个卷积层是64个通道、$1\\times 1$卷积层;第二个卷积层使用将通道数量增加三倍的$3\\times 3$卷积层。\n", "这对应于Inception块中的第二条路径。\n" ] }, { "cell_type": "code", "execution_count": 3, "id": "d5a1d1b0", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T07:18:49.883499Z", "iopub.status.busy": "2023-08-18T07:18:49.882740Z", "iopub.status.idle": "2023-08-18T07:18:49.892430Z", "shell.execute_reply": "2023-08-18T07:18:49.891290Z" }, "origin_pos": 12, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "b2 = nn.Sequential(nn.Conv2d(64, 64, kernel_size=1),\n", " nn.ReLU(),\n", " nn.Conv2d(64, 192, kernel_size=3, padding=1),\n", " nn.ReLU(),\n", " nn.MaxPool2d(kernel_size=3, stride=2, padding=1))" ] }, { "cell_type": "markdown", "id": "c5532216", "metadata": { "origin_pos": 15 }, "source": [ "第三个模块串联两个完整的Inception块。\n", "第一个Inception块的输出通道数为$64+128+32+32=256$,四个路径之间的输出通道数量比为$64:128:32:32=2:4:1:1$。\n", "第二个和第三个路径首先将输入通道的数量分别减少到$96/192=1/2$和$16/192=1/12$,然后连接第二个卷积层。第二个Inception块的输出通道数增加到$128+192+96+64=480$,四个路径之间的输出通道数量比为$128:192:96:64 = 4:6:3:2$。\n", "第二条和第三条路径首先将输入通道的数量分别减少到$128/256=1/2$和$32/256=1/8$。\n" ] }, { "cell_type": "code", "execution_count": 4, "id": "11b7cf54", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T07:18:49.897526Z", "iopub.status.busy": "2023-08-18T07:18:49.896704Z", "iopub.status.idle": "2023-08-18T07:18:49.916207Z", "shell.execute_reply": "2023-08-18T07:18:49.915075Z" }, "origin_pos": 17, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "b3 = nn.Sequential(Inception(192, 64, (96, 128), (16, 32), 32),\n", " Inception(256, 128, (128, 192), (32, 96), 64),\n", " nn.MaxPool2d(kernel_size=3, stride=2, padding=1))" ] }, { "cell_type": "markdown", "id": "f54c6828", "metadata": { "origin_pos": 20 }, "source": [ "第四模块更加复杂,\n", "它串联了5个Inception块,其输出通道数分别是$192+208+48+64=512$、$160+224+64+64=512$、$128+256+64+64=512$、$112+288+64+64=528$和$256+320+128+128=832$。\n", "这些路径的通道数分配和第三模块中的类似,首先是含$3×3$卷积层的第二条路径输出最多通道,其次是仅含$1×1$卷积层的第一条路径,之后是含$5×5$卷积层的第三条路径和含$3×3$最大汇聚层的第四条路径。\n", "其中第二、第三条路径都会先按比例减小通道数。\n", "这些比例在各个Inception块中都略有不同。\n" ] }, { "cell_type": "code", "execution_count": 5, "id": "b55bd896", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T07:18:49.921459Z", "iopub.status.busy": "2023-08-18T07:18:49.920668Z", "iopub.status.idle": "2023-08-18T07:18:49.984376Z", "shell.execute_reply": "2023-08-18T07:18:49.982781Z" }, "origin_pos": 22, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "b4 = nn.Sequential(Inception(480, 192, (96, 208), (16, 48), 64),\n", " Inception(512, 160, (112, 224), (24, 64), 64),\n", " Inception(512, 128, (128, 256), (24, 64), 64),\n", " Inception(512, 112, (144, 288), (32, 64), 64),\n", " Inception(528, 256, (160, 320), (32, 128), 128),\n", " nn.MaxPool2d(kernel_size=3, stride=2, padding=1))" ] }, { "cell_type": "markdown", "id": "2a44e596", "metadata": { "origin_pos": 25 }, "source": [ "第五模块包含输出通道数为$256+320+128+128=832$和$384+384+128+128=1024$的两个Inception块。\n", "其中每条路径通道数的分配思路和第三、第四模块中的一致,只是在具体数值上有所不同。\n", "需要注意的是,第五模块的后面紧跟输出层,该模块同NiN一样使用全局平均汇聚层,将每个通道的高和宽变成1。\n", "最后我们将输出变成二维数组,再接上一个输出个数为标签类别数的全连接层。\n" ] }, { "cell_type": "code", "execution_count": 6, "id": "8fc4cc33", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T07:18:49.990643Z", "iopub.status.busy": 
"2023-08-18T07:18:49.989777Z", "iopub.status.idle": "2023-08-18T07:18:50.042949Z", "shell.execute_reply": "2023-08-18T07:18:50.041737Z" }, "origin_pos": 27, "tab": [ "pytorch" ] }, "outputs": [], "source": [ "b5 = nn.Sequential(Inception(832, 256, (160, 320), (32, 128), 128),\n", " Inception(832, 384, (192, 384), (48, 128), 128),\n", " nn.AdaptiveAvgPool2d((1,1)),\n", " nn.Flatten())\n", "\n", "net = nn.Sequential(b1, b2, b3, b4, b5, nn.Linear(1024, 10))" ] }, { "cell_type": "markdown", "id": "7e92ad19", "metadata": { "origin_pos": 30 }, "source": [ "GoogLeNet模型的计算复杂,而且不如VGG那样便于修改通道数。\n", "[**为了使Fashion-MNIST上的训练短小精悍,我们将输入的高和宽从224降到96**],这简化了计算。下面演示各个模块输出的形状变化。\n" ] }, { "cell_type": "code", "execution_count": 7, "id": "51845ad5", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T07:18:50.048543Z", "iopub.status.busy": "2023-08-18T07:18:50.047764Z", "iopub.status.idle": "2023-08-18T07:18:50.254447Z", "shell.execute_reply": "2023-08-18T07:18:50.252245Z" }, "origin_pos": 32, "tab": [ "pytorch" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Sequential output shape:\t torch.Size([1, 64, 24, 24])\n", "Sequential output shape:\t torch.Size([1, 192, 12, 12])\n", "Sequential output shape:\t torch.Size([1, 480, 6, 6])\n", "Sequential output shape:\t torch.Size([1, 832, 3, 3])\n", "Sequential output shape:\t torch.Size([1, 1024])\n", "Linear output shape:\t torch.Size([1, 10])\n" ] } ], "source": [ "X = torch.rand(size=(1, 1, 96, 96))\n", "for layer in net:\n", " X = layer(X)\n", " print(layer.__class__.__name__,'output shape:\\t', X.shape)" ] }, { "cell_type": "markdown", "id": "1dfcf730", "metadata": { "origin_pos": 35 }, "source": [ "## [**训练模型**]\n", "\n", "和以前一样,我们使用Fashion-MNIST数据集来训练我们的模型。在训练之前,我们将图片转换为$96 \\times 96$分辨率。\n" ] }, { "cell_type": "code", "execution_count": 8, "id": "fe051d5b", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T07:18:50.260612Z", "iopub.status.busy": "2023-08-18T07:18:50.259996Z", "iopub.status.idle": "2023-08-18T07:22:36.284598Z", "shell.execute_reply": "2023-08-18T07:22:36.283681Z" }, "origin_pos": 36, "tab": [ "pytorch" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "loss 0.262, train acc 0.900, test acc 0.886\n", "3265.5 examples/sec on cuda:0\n" ] }, { "data": { "image/svg+xml": [ "\n", "\n", "\n", " \n", " \n", " \n", " \n", " 2023-08-18T07:22:36.241281\n", " image/svg+xml\n", " \n", " \n", " Matplotlib v3.5.1, https://matplotlib.org/\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " 
\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "\n" ], "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "lr, num_epochs, batch_size = 0.1, 10, 128\n", "train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=96)\n", "d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())" ] }, { "cell_type": "markdown", "id": "535046e9", "metadata": { "origin_pos": 37 }, "source": [ "## 小结\n", "\n", "* Inception块相当于一个有4条路径的子网络。它通过不同窗口形状的卷积层和最大汇聚层来并行抽取信息,并使用$1×1$卷积层减少每像素级别上的通道维数从而降低模型复杂度。\n", "* GoogLeNet将多个设计精细的Inception块与其他层(卷积层、全连接层)串联起来。其中Inception块的通道数分配之比是在ImageNet数据集上通过大量的实验得来的。\n", "* GoogLeNet和它的后继者们一度是ImageNet上最有效的模型之一:它以较低的计算复杂度提供了类似的测试精度。\n", "\n", "## 练习\n", "\n", "1. GoogLeNet有一些后续版本。尝试实现并运行它们,然后观察实验结果。这些后续版本包括:\n", " * 添加批量规范化层 :cite:`Ioffe.Szegedy.2015`(batch normalization),在 :numref:`sec_batch_norm`中将介绍;\n", " * 对Inception模块进行调整 :cite:`Szegedy.Vanhoucke.Ioffe.ea.2016`;\n", " * 使用标签平滑(label smoothing)进行模型正则化 :cite:`Szegedy.Vanhoucke.Ioffe.ea.2016`;\n", " * 加入残差连接 :cite:`Szegedy.Ioffe.Vanhoucke.ea.2017`。( :numref:`sec_resnet`将介绍)。\n", "1. 使用GoogLeNet的最小图像大小是多少?\n", "1. 将AlexNet、VGG和NiN的模型参数大小与GoogLeNet进行比较。后两个网络架构是如何显著减少模型参数大小的?\n" ] }, { "cell_type": "markdown", "id": "cf7d384c", "metadata": { "origin_pos": 39, "tab": [ "pytorch" ] }, "source": [ "[Discussions](https://discuss.d2l.ai/t/1871)\n" ] } ], "metadata": { "language_info": { "name": "python" }, "required_libs": [] }, "nbformat": 4, "nbformat_minor": 5 }