{
"cells": [
{
"cell_type": "markdown",
"id": "9f271f9b",
"metadata": {
"origin_pos": 0
},
"source": [
"# 稠密连接网络(DenseNet)\n",
"\n",
"ResNet极大地改变了如何参数化深层网络中函数的观点。\n",
"*稠密连接网络*(DenseNet) :cite:`Huang.Liu.Van-Der-Maaten.ea.2017`在某种程度上是ResNet的逻辑扩展。让我们先从数学上了解一下。\n",
"\n",
"## 从ResNet到DenseNet\n",
"\n",
"回想一下任意函数的泰勒展开式(Taylor expansion),它把这个函数分解成越来越高阶的项。在$x$接近0时,\n",
"\n",
"$$f(x) = f(0) + f'(0) x + \\frac{f''(0)}{2!} x^2 + \\frac{f'''(0)}{3!} x^3 + \\ldots.$$\n",
"\n",
"同样,ResNet将函数展开为\n",
"\n",
"$$f(\\mathbf{x}) = \\mathbf{x} + g(\\mathbf{x}).$$\n",
"\n",
"也就是说,ResNet将$f$分解为两部分:一个简单的线性项和一个复杂的非线性项。\n",
"那么再向前拓展一步,如果我们想将$f$拓展成超过两部分的信息呢?\n",
"一种方案便是DenseNet。\n",
"\n",
"\n",
":label:`fig_densenet_block`\n",
"\n",
"如 :numref:`fig_densenet_block`所示,ResNet和DenseNet的关键区别在于,DenseNet输出是*连接*(用图中的$[,]$表示)而不是如ResNet的简单相加。\n",
"因此,在应用越来越复杂的函数序列后,我们执行从$\\mathbf{x}$到其展开式的映射:\n",
"\n",
"$$\\mathbf{x} \\to \\left[\n",
"\\mathbf{x},\n",
"f_1(\\mathbf{x}),\n",
"f_2([\\mathbf{x}, f_1(\\mathbf{x})]), f_3([\\mathbf{x}, f_1(\\mathbf{x}), f_2([\\mathbf{x}, f_1(\\mathbf{x})])]), \\ldots\\right].$$\n",
"\n",
"最后,将这些展开式结合到多层感知机中,再次减少特征的数量。\n",
"实现起来非常简单:我们不需要添加术语,而是将它们连接起来。\n",
"DenseNet这个名字由变量之间的“稠密连接”而得来,最后一层与之前的所有层紧密相连。\n",
"稠密连接如 :numref:`fig_densenet`所示。\n",
"\n",
"\n",
":label:`fig_densenet`\n",
"\n",
"稠密网络主要由2部分构成:*稠密块*(dense block)和*过渡层*(transition layer)。\n",
"前者定义如何连接输入和输出,而后者则控制通道数量,使其不会太复杂。\n",
"\n",
"## (**稠密块体**)\n",
"\n",
"DenseNet使用了ResNet改良版的“批量规范化、激活和卷积”架构(参见 :numref:`sec_resnet`中的练习)。\n",
"我们首先实现一下这个架构。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "c0d77805",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:16:08.050557Z",
"iopub.status.busy": "2023-08-18T07:16:08.050029Z",
"iopub.status.idle": "2023-08-18T07:16:11.746531Z",
"shell.execute_reply": "2023-08-18T07:16:11.745702Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import torch\n",
"from torch import nn\n",
"from d2l import torch as d2l\n",
"\n",
"\n",
"def conv_block(input_channels, num_channels):\n",
" return nn.Sequential(\n",
" nn.BatchNorm2d(input_channels), nn.ReLU(),\n",
" nn.Conv2d(input_channels, num_channels, kernel_size=3, padding=1))"
]
},
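{
"cell_type": "markdown",
"id": "d2c1a7f0",
"metadata": {},
"source": [
"As a quick sanity check (a hedged sketch, not part of the original model definition): because the convolution uses `kernel_size=3` with `padding=1` and a stride of 1, a `conv_block` preserves the spatial dimensions and only changes the number of channels.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e3d2b8a1",
"metadata": {
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"# A hedged sanity check: conv_block keeps the height and width unchanged\n",
"# and maps 3 input channels to 10 output channels\n",
"tmp_blk = conv_block(3, 10)\n",
"tmp_blk(torch.randn(4, 3, 8, 8)).shape  # expected: torch.Size([4, 10, 8, 8])"
]
},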
{
"cell_type": "markdown",
"id": "64e078d7",
"metadata": {
"origin_pos": 5
},
"source": [
"一个*稠密块*由多个卷积块组成,每个卷积块使用相同数量的输出通道。\n",
"然而,在前向传播中,我们将每个卷积块的输入和输出在通道维上连结。\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "26c9a602",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:16:11.750525Z",
"iopub.status.busy": "2023-08-18T07:16:11.750169Z",
"iopub.status.idle": "2023-08-18T07:16:11.756520Z",
"shell.execute_reply": "2023-08-18T07:16:11.755779Z"
},
"origin_pos": 7,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"class DenseBlock(nn.Module):\n",
" def __init__(self, num_convs, input_channels, num_channels):\n",
" super(DenseBlock, self).__init__()\n",
" layer = []\n",
" for i in range(num_convs):\n",
" layer.append(conv_block(\n",
" num_channels * i + input_channels, num_channels))\n",
" self.net = nn.Sequential(*layer)\n",
"\n",
" def forward(self, X):\n",
" for blk in self.net:\n",
" Y = blk(X)\n",
" # 连接通道维度上每个块的输入和输出\n",
" X = torch.cat((X, Y), dim=1)\n",
" return X"
]
},
{
"cell_type": "markdown",
"id": "6e0cf1d4",
"metadata": {
"origin_pos": 10
},
"source": [
"在下面的例子中,我们[**定义一个**]有2个输出通道数为10的(**`DenseBlock`**)。\n",
"使用通道数为3的输入时,我们会得到通道数为$3+2\\times 10=23$的输出。\n",
"卷积块的通道数控制了输出通道数相对于输入通道数的增长,因此也被称为*增长率*(growth rate)。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6894a1d5",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:16:11.759699Z",
"iopub.status.busy": "2023-08-18T07:16:11.759433Z",
"iopub.status.idle": "2023-08-18T07:16:11.773609Z",
"shell.execute_reply": "2023-08-18T07:16:11.772841Z"
},
"origin_pos": 12,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([4, 23, 8, 8])"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"blk = DenseBlock(2, 3, 10)\n",
"X = torch.randn(4, 3, 8, 8)\n",
"Y = blk(X)\n",
"Y.shape"
]
},
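{
"cell_type": "markdown",
"id": "f4e3c9b2",
"metadata": {},
"source": [
"To make this growth concrete, the following hedged sketch (the variables `X_trace` and `Y_trace` are illustrative, not part of the original code) replays the forward pass of `blk` step by step: the channel count grows from 3 to $3+10=13$ and then to $13+10=23$.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a5f4d0c3",
"metadata": {
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"# A hedged sketch: trace how concatenation grows the channel dimension\n",
"# inside the dense block defined above\n",
"X_trace = torch.randn(4, 3, 8, 8)\n",
"for i, layer in enumerate(blk.net):\n",
"    Y_trace = layer(X_trace)\n",
"    X_trace = torch.cat((X_trace, Y_trace), dim=1)\n",
"    print(f'after convolution block {i}: {X_trace.shape}')"
]
},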
{
"cell_type": "markdown",
"id": "79590d57",
"metadata": {
"origin_pos": 15
},
"source": [
"## [**过渡层**]\n",
"\n",
"由于每个稠密块都会带来通道数的增加,使用过多则会过于复杂化模型。\n",
"而过渡层可以用来控制模型复杂度。\n",
"它通过$1\\times 1$卷积层来减小通道数,并使用步幅为2的平均汇聚层减半高和宽,从而进一步降低模型复杂度。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "19c97dd5",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:16:11.778396Z",
"iopub.status.busy": "2023-08-18T07:16:11.778129Z",
"iopub.status.idle": "2023-08-18T07:16:11.782692Z",
"shell.execute_reply": "2023-08-18T07:16:11.781920Z"
},
"origin_pos": 17,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"def transition_block(input_channels, num_channels):\n",
" return nn.Sequential(\n",
" nn.BatchNorm2d(input_channels), nn.ReLU(),\n",
" nn.Conv2d(input_channels, num_channels, kernel_size=1),\n",
" nn.AvgPool2d(kernel_size=2, stride=2))"
]
},
{
"cell_type": "markdown",
"id": "911d280a",
"metadata": {
"origin_pos": 20
},
"source": [
"对上一个例子中稠密块的输出[**使用**]通道数为10的[**过渡层**]。\n",
"此时输出的通道数减为10,高和宽均减半。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "7ca47bbc",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:16:11.786750Z",
"iopub.status.busy": "2023-08-18T07:16:11.786485Z",
"iopub.status.idle": "2023-08-18T07:16:11.794052Z",
"shell.execute_reply": "2023-08-18T07:16:11.792935Z"
},
"origin_pos": 22,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([4, 10, 4, 4])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"blk = transition_block(23, 10)\n",
"blk(Y).shape"
]
},
{
"cell_type": "markdown",
"id": "a4994898",
"metadata": {
"origin_pos": 24
},
"source": [
"## [**DenseNet模型**]\n",
"\n",
"我们来构造DenseNet模型。DenseNet首先使用同ResNet一样的单卷积层和最大汇聚层。\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "2592cd17",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:16:11.799267Z",
"iopub.status.busy": "2023-08-18T07:16:11.798419Z",
"iopub.status.idle": "2023-08-18T07:16:11.805699Z",
"shell.execute_reply": "2023-08-18T07:16:11.804494Z"
},
"origin_pos": 26,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"b1 = nn.Sequential(\n",
" nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),\n",
" nn.BatchNorm2d(64), nn.ReLU(),\n",
" nn.MaxPool2d(kernel_size=3, stride=2, padding=1))"
]
},
{
"cell_type": "markdown",
"id": "1480601a",
"metadata": {
"origin_pos": 29
},
"source": [
"接下来,类似于ResNet使用的4个残差块,DenseNet使用的是4个稠密块。\n",
"与ResNet类似,我们可以设置每个稠密块使用多少个卷积层。\n",
"这里我们设成4,从而与 :numref:`sec_resnet`的ResNet-18保持一致。\n",
"稠密块里的卷积层通道数(即增长率)设为32,所以每个稠密块将增加128个通道。\n",
"\n",
"在每个模块之间,ResNet通过步幅为2的残差块减小高和宽,DenseNet则使用过渡层来减半高和宽,并减半通道数。\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "572c0b37",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:16:11.810681Z",
"iopub.status.busy": "2023-08-18T07:16:11.809914Z",
"iopub.status.idle": "2023-08-18T07:16:11.835094Z",
"shell.execute_reply": "2023-08-18T07:16:11.834042Z"
},
"origin_pos": 31,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"# num_channels为当前的通道数\n",
"num_channels, growth_rate = 64, 32\n",
"num_convs_in_dense_blocks = [4, 4, 4, 4]\n",
"blks = []\n",
"for i, num_convs in enumerate(num_convs_in_dense_blocks):\n",
" blks.append(DenseBlock(num_convs, num_channels, growth_rate))\n",
" # 上一个稠密块的输出通道数\n",
" num_channels += num_convs * growth_rate\n",
" # 在稠密块之间添加一个转换层,使通道数量减半\n",
" if i != len(num_convs_in_dense_blocks) - 1:\n",
" blks.append(transition_block(num_channels, num_channels // 2))\n",
" num_channels = num_channels // 2"
]
},
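{
"cell_type": "markdown",
"id": "b6a5e1d4",
"metadata": {},
"source": [
"As a hedged check of the channel bookkeeping above (not part of the original text): each dense block adds $4 \\times 32 = 128$ channels and each of the three transition layers halves the count, so the channel count evolves as $64 \\to 192 \\to 96 \\to 224 \\to 112 \\to 240 \\to 120 \\to 248$. The short sketch below reproduces this arithmetic.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c7b6f2e5",
"metadata": {
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"# A hedged sketch: replay the channel bookkeeping of the loop above\n",
"channels = 64\n",
"for i in range(4):\n",
"    channels += 4 * 32  # each dense block adds num_convs * growth_rate channels\n",
"    if i != 3:\n",
"        channels //= 2  # each transition layer halves the channel count\n",
"print(channels == num_channels)  # expected: True (both equal 248)"
]
},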
{
"cell_type": "markdown",
"id": "10c456d6",
"metadata": {
"origin_pos": 34
},
"source": [
"与ResNet类似,最后接上全局汇聚层和全连接层来输出结果。\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "2ea5a6f7",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:16:11.840349Z",
"iopub.status.busy": "2023-08-18T07:16:11.839579Z",
"iopub.status.idle": "2023-08-18T07:16:11.847204Z",
"shell.execute_reply": "2023-08-18T07:16:11.846173Z"
},
"origin_pos": 36,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"net = nn.Sequential(\n",
" b1, *blks,\n",
" nn.BatchNorm2d(num_channels), nn.ReLU(),\n",
" nn.AdaptiveAvgPool2d((1, 1)),\n",
" nn.Flatten(),\n",
" nn.Linear(num_channels, 10))"
]
},
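{
"cell_type": "markdown",
"id": "d8c7a3f6",
"metadata": {},
"source": [
"Before training, it can help to verify the shapes flowing through the network. The following is a minimal sketch in the shape-inspection idiom used elsewhere in the book; the dummy input and the variable name `X_demo` are illustrative assumptions, not part of the original text.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e9d8b4a7",
"metadata": {
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"# A hedged sketch: pass a dummy single-channel 96x96 image through the\n",
"# network and print the output shape of every top-level module\n",
"X_demo = torch.randn(1, 1, 96, 96)\n",
"for layer in net:\n",
"    X_demo = layer(X_demo)\n",
"    print(layer.__class__.__name__, 'output shape:\\t', X_demo.shape)"
]
},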
{
"cell_type": "markdown",
"id": "c9ac6a83",
"metadata": {
"origin_pos": 39
},
"source": [
"## [**训练模型**]\n",
"\n",
"由于这里使用了比较深的网络,本节里我们将输入高和宽从224降到96来简化计算。\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "dab03cd3",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:16:11.852215Z",
"iopub.status.busy": "2023-08-18T07:16:11.851453Z",
"iopub.status.idle": "2023-08-18T07:18:39.645340Z",
"shell.execute_reply": "2023-08-18T07:18:39.643627Z"
},
"origin_pos": 40,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"loss 0.140, train acc 0.948, test acc 0.885\n",
"5626.3 examples/sec on cuda:0\n"
]
}
],
"source": [
"lr, num_epochs, batch_size = 0.1, 10, 256\n",
"train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=96)\n",
"d2l.train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())"
]
},
{
"cell_type": "markdown",
"id": "459b1b01",
"metadata": {
"origin_pos": 41
},
"source": [
"## 小结\n",
"\n",
"* 在跨层连接上,不同于ResNet中将输入与输出相加,稠密连接网络(DenseNet)在通道维上连结输入与输出。\n",
"* DenseNet的主要构建模块是稠密块和过渡层。\n",
"* 在构建DenseNet时,我们需要通过添加过渡层来控制网络的维数,从而再次减少通道的数量。\n",
"\n",
"## 练习\n",
"\n",
"1. 为什么我们在过渡层使用平均汇聚层而不是最大汇聚层?\n",
"1. DenseNet的优点之一是其模型参数比ResNet小。为什么呢?\n",
"1. DenseNet一个诟病的问题是内存或显存消耗过多。\n",
" 1. 真的是这样吗?可以把输入形状换成$224 \\times 224$,来看看实际的显存消耗。\n",
" 1. 有另一种方法来减少显存消耗吗?需要改变框架么?\n",
"1. 实现DenseNet论文 :cite:`Huang.Liu.Van-Der-Maaten.ea.2017`表1所示的不同DenseNet版本。\n",
"1. 应用DenseNet的思想设计一个基于多层感知机的模型。将其应用于 :numref:`sec_kaggle_house`中的房价预测任务。\n"
]
},
{
"cell_type": "markdown",
"id": "710e8fed",
"metadata": {
"origin_pos": 43,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/1880)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}