{
"cells": [
{
"cell_type": "markdown",
"id": "8082691a",
"metadata": {
"origin_pos": 0
},
"source": [
"# 转置卷积\n",
":label:`sec_transposed_conv`\n",
"\n",
"到目前为止,我们所见到的卷积神经网络层,例如卷积层( :numref:`sec_conv_layer`)和汇聚层( :numref:`sec_pooling`),通常会减少下采样输入图像的空间维度(高和宽)。\n",
"然而如果输入和输出图像的空间维度相同,在以像素级分类的语义分割中将会很方便。\n",
"例如,输出像素所处的通道维可以保有输入像素在同一位置上的分类结果。\n",
"\n",
"为了实现这一点,尤其是在空间维度被卷积神经网络层缩小后,我们可以使用另一种类型的卷积神经网络层,它可以增加上采样中间层特征图的空间维度。\n",
"本节将介绍\n",
"*转置卷积*transposed convolution :cite:`Dumoulin.Visin.2016`\n",
"用于逆转下采样导致的空间尺寸减小。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "1f39b5ef",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:22.451701Z",
"iopub.status.busy": "2023-08-18T07:05:22.451411Z",
"iopub.status.idle": "2023-08-18T07:05:24.490785Z",
"shell.execute_reply": "2023-08-18T07:05:24.489970Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import torch\n",
"from torch import nn\n",
"from d2l import torch as d2l"
]
},
{
"cell_type": "markdown",
"id": "f1007d54",
"metadata": {
"origin_pos": 4
},
"source": [
"## 基本操作\n",
"\n",
"让我们暂时忽略通道,从基本的转置卷积开始,设步幅为1且没有填充。\n",
"假设我们有一个$n_h \\times n_w$的输入张量和一个$k_h \\times k_w$的卷积核。\n",
"以步幅为1滑动卷积核窗口,每行$n_w$次,每列$n_h$次,共产生$n_h n_w$个中间结果。\n",
"每个中间结果都是一个$(n_h + k_h - 1) \\times (n_w + k_w - 1)$的张量,初始化为0。\n",
"为了计算每个中间张量,输入张量中的每个元素都要乘以卷积核,从而使所得的$k_h \\times k_w$张量替换中间张量的一部分。\n",
"请注意,每个中间张量被替换部分的位置与输入张量中元素的位置相对应。\n",
"最后,所有中间结果相加以获得最终结果。\n",
"\n",
"例如, :numref:`fig_trans_conv`解释了如何为$2\\times 2$的输入张量计算卷积核为$2\\times 2$的转置卷积。\n",
"\n",
"![卷积核为 $2\\times 2$ 的转置卷积。阴影部分是中间张量的一部分,也是用于计算的输入和卷积核张量元素。 ](../img/trans_conv.svg)\n",
":label:`fig_trans_conv`\n",
"\n",
"我们可以对输入矩阵`X`和卷积核矩阵`K`(**实现基本的转置卷积运算**)`trans_conv`。\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "e6931d90",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:24.494981Z",
"iopub.status.busy": "2023-08-18T07:05:24.494307Z",
"iopub.status.idle": "2023-08-18T07:05:24.499745Z",
"shell.execute_reply": "2023-08-18T07:05:24.498885Z"
},
"origin_pos": 5,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"def trans_conv(X, K):\n",
" h, w = K.shape\n",
" Y = torch.zeros((X.shape[0] + h - 1, X.shape[1] + w - 1))\n",
" for i in range(X.shape[0]):\n",
" for j in range(X.shape[1]):\n",
" Y[i: i + h, j: j + w] += X[i, j] * K\n",
" return Y"
]
},
{
"cell_type": "markdown",
"id": "6d64431b",
"metadata": {
"origin_pos": 6
},
"source": [
"与通过卷积核“减少”输入元素的常规卷积(在 :numref:`sec_conv_layer`中)相比,转置卷积通过卷积核“广播”输入元素,从而产生大于输入的输出。\n",
"我们可以通过 :numref:`fig_trans_conv`来构建输入张量`X`和卷积核张量`K`从而[**验证上述实现输出**]。\n",
"此实现是基本的二维转置卷积运算。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "a7c6e2fd",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:24.503202Z",
"iopub.status.busy": "2023-08-18T07:05:24.502646Z",
"iopub.status.idle": "2023-08-18T07:05:24.531448Z",
"shell.execute_reply": "2023-08-18T07:05:24.530730Z"
},
"origin_pos": 7,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[ 0., 0., 1.],\n",
" [ 0., 4., 6.],\n",
" [ 4., 12., 9.]])"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"X = torch.tensor([[0.0, 1.0], [2.0, 3.0]])\n",
"K = torch.tensor([[0.0, 1.0], [2.0, 3.0]])\n",
"trans_conv(X, K)"
]
},
{
"cell_type": "markdown",
"id": "c6698e0d",
"metadata": {
"origin_pos": 8
},
"source": [
"或者,当输入`X`和卷积核`K`都是四维张量时,我们可以[**使用高级API获得相同的结果**]。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "b9de6d80",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:24.535386Z",
"iopub.status.busy": "2023-08-18T07:05:24.534826Z",
"iopub.status.idle": "2023-08-18T07:05:24.544484Z",
"shell.execute_reply": "2023-08-18T07:05:24.543747Z"
},
"origin_pos": 10,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[[[ 0., 0., 1.],\n",
" [ 0., 4., 6.],\n",
" [ 4., 12., 9.]]]], grad_fn=<ConvolutionBackward0>)"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"X, K = X.reshape(1, 1, 2, 2), K.reshape(1, 1, 2, 2)\n",
"tconv = nn.ConvTranspose2d(1, 1, kernel_size=2, bias=False)\n",
"tconv.weight.data = K\n",
"tconv(X)"
]
},
{
"cell_type": "markdown",
"id": "80936d2e",
"metadata": {
"origin_pos": 12
},
"source": [
"## [**填充、步幅和多通道**]\n",
"\n",
"与常规卷积不同,在转置卷积中,填充被应用于的输出(常规卷积将填充应用于输入)。\n",
"例如,当将高和宽两侧的填充数指定为1时,转置卷积的输出中将删除第一和最后的行与列。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "cd114de1",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:24.548040Z",
"iopub.status.busy": "2023-08-18T07:05:24.547398Z",
"iopub.status.idle": "2023-08-18T07:05:24.553659Z",
"shell.execute_reply": "2023-08-18T07:05:24.552864Z"
},
"origin_pos": 14,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[[[4.]]]], grad_fn=<ConvolutionBackward0>)"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tconv = nn.ConvTranspose2d(1, 1, kernel_size=2, padding=1, bias=False)\n",
"tconv.weight.data = K\n",
"tconv(X)"
]
},
{
"cell_type": "markdown",
"id": "22272c8b",
"metadata": {
"origin_pos": 16
},
"source": [
"在转置卷积中,步幅被指定为中间结果(输出),而不是输入。\n",
"使用 :numref:`fig_trans_conv`中相同输入和卷积核张量,将步幅从1更改为2会增加中间张量的高和权重,因此输出张量在 :numref:`fig_trans_conv_stride2`中。\n",
"\n",
"![卷积核为$2\\times 2$,步幅为2的转置卷积。阴影部分是中间张量的一部分,也是用于计算的输入和卷积核张量元素。](../img/trans_conv_stride2.svg)\n",
":label:`fig_trans_conv_stride2`\n",
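"\n",
"More generally, for kernel size $k$, stride $s$, and padding $p$ (per spatial dimension), a transposed convolution maps an input of size $n$ to an output of size $(n-1)s - 2p + k$; this matches the shape formula of `nn.ConvTranspose2d` with its default `dilation` and `output_padding`. For :numref:`fig_trans_conv_stride2`, this gives $(2-1)\\times 2 - 0 + 2 = 4$.\n",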
"\n",
"以下代码可以验证 :numref:`fig_trans_conv_stride2`中步幅为2的转置卷积的输出。\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "48064406",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:24.557362Z",
"iopub.status.busy": "2023-08-18T07:05:24.556727Z",
"iopub.status.idle": "2023-08-18T07:05:24.563081Z",
"shell.execute_reply": "2023-08-18T07:05:24.562365Z"
},
"origin_pos": 18,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[[[0., 0., 0., 1.],\n",
" [0., 0., 2., 3.],\n",
" [0., 2., 0., 3.],\n",
" [4., 6., 6., 9.]]]], grad_fn=<ConvolutionBackward0>)"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tconv = nn.ConvTranspose2d(1, 1, kernel_size=2, stride=2, bias=False)\n",
"tconv.weight.data = K\n",
"tconv(X)"
]
},
{
"cell_type": "markdown",
"id": "79ac62fd",
"metadata": {
"origin_pos": 20
},
"source": [
"对于多个输入和输出通道,转置卷积与常规卷积以相同方式运作。\n",
"假设输入有$c_i$个通道,且转置卷积为每个输入通道分配了一个$k_h\\times k_w$的卷积核张量。\n",
"当指定多个输出通道时,每个输出通道将有一个$c_i\\times k_h\\times k_w$的卷积核。\n",
"\n",
"同样,如果我们将$\\mathsf{X}$代入卷积层$f$来输出$\\mathsf{Y}=f(\\mathsf{X})$,并创建一个与$f$具有相同的超参数、但输出通道数量是$\\mathsf{X}$中通道数的转置卷积层$g$,那么$g(Y)$的形状将与$\\mathsf{X}$相同。\n",
"下面的示例可以解释这一点。\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "5e7033d7",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:24.566613Z",
"iopub.status.busy": "2023-08-18T07:05:24.565990Z",
"iopub.status.idle": "2023-08-18T07:05:24.577437Z",
"shell.execute_reply": "2023-08-18T07:05:24.576434Z"
},
"origin_pos": 22,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"X = torch.rand(size=(1, 10, 16, 16))\n",
"conv = nn.Conv2d(10, 20, kernel_size=5, padding=2, stride=3)\n",
"tconv = nn.ConvTranspose2d(20, 10, kernel_size=5, padding=2, stride=3)\n",
"tconv(conv(X)).shape == X.shape"
]
},
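{
"cell_type": "markdown",
"id": "5b8c2e90",
"metadata": {},
"source": [
"As a quick check on the kernel shapes described above: in PyTorch, the weight of `nn.ConvTranspose2d` is stored with shape (`in_channels`, `out_channels`, $k_h$, $k_w$), i.e., one $k_h\\times k_w$ kernel per pair of input and output channels.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a4f7c21",
"metadata": {
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"# 20 input channels, 10 output channels, and 5x5 kernels\n",
"tconv.weight.shape"
]
},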
{
"cell_type": "markdown",
"id": "9908cdc8",
"metadata": {
"origin_pos": 24
},
"source": [
"## [**与矩阵变换的联系**]\n",
":label:`subsec-connection-to-mat-transposition`\n",
"\n",
"转置卷积为何以矩阵变换命名呢?\n",
"让我们首先看看如何使用矩阵乘法来实现卷积。\n",
"在下面的示例中,我们定义了一个$3\\times 3$的输入`X`和$2\\times 2$卷积核`K`,然后使用`corr2d`函数计算卷积输出`Y`。\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "260d5c6d",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:24.581485Z",
"iopub.status.busy": "2023-08-18T07:05:24.580866Z",
"iopub.status.idle": "2023-08-18T07:05:24.589179Z",
"shell.execute_reply": "2023-08-18T07:05:24.588233Z"
},
"origin_pos": 25,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[27., 37.],\n",
" [57., 67.]])"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"X = torch.arange(9.0).reshape(3, 3)\n",
"K = torch.tensor([[1.0, 2.0], [3.0, 4.0]])\n",
"Y = d2l.corr2d(X, K)\n",
"Y"
]
},
{
"cell_type": "markdown",
"id": "d5cb87b2",
"metadata": {
"origin_pos": 27
},
"source": [
"接下来,我们将卷积核`K`重写为包含大量0的稀疏权重矩阵`W`。\n",
"权重矩阵的形状是($4$,$9$),其中非0元素来自卷积核`K`。\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "d9f6ce2b",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:24.592769Z",
"iopub.status.busy": "2023-08-18T07:05:24.592164Z",
"iopub.status.idle": "2023-08-18T07:05:24.602392Z",
"shell.execute_reply": "2023-08-18T07:05:24.601439Z"
},
"origin_pos": 28,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[1., 2., 0., 3., 4., 0., 0., 0., 0.],\n",
" [0., 1., 2., 0., 3., 4., 0., 0., 0.],\n",
" [0., 0., 0., 1., 2., 0., 3., 4., 0.],\n",
" [0., 0., 0., 0., 1., 2., 0., 3., 4.]])"
]
},
"execution_count": 9,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"def kernel2matrix(K):\n",
" k, W = torch.zeros(5), torch.zeros((4, 9))\n",
" k[:2], k[3:5] = K[0, :], K[1, :]\n",
" W[0, :5], W[1, 1:6], W[2, 3:8], W[3, 4:] = k, k, k, k\n",
" return W\n",
"\n",
"W = kernel2matrix(K)\n",
"W"
]
},
{
"cell_type": "markdown",
"id": "12f9b037",
"metadata": {
"origin_pos": 30
},
"source": [
"逐行连结输入`X`,获得了一个长度为9的矢量。\n",
"然后,`W`的矩阵乘法和向量化的`X`给出了一个长度为4的向量。\n",
"重塑它之后,可以获得与上面的原始卷积操作所得相同的结果`Y`:我们刚刚使用矩阵乘法实现了卷积。\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "1fb803d0",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:24.606249Z",
"iopub.status.busy": "2023-08-18T07:05:24.605496Z",
"iopub.status.idle": "2023-08-18T07:05:24.612872Z",
"shell.execute_reply": "2023-08-18T07:05:24.611900Z"
},
"origin_pos": 31,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[True, True],\n",
" [True, True]])"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Y == torch.matmul(W, X.reshape(-1)).reshape(2, 2)"
]
},
{
"cell_type": "markdown",
"id": "27394a2c",
"metadata": {
"origin_pos": 33
},
"source": [
"同样,我们可以使用矩阵乘法来实现转置卷积。\n",
"在下面的示例中,我们将上面的常规卷积$2 \\times 2$的输出`Y`作为转置卷积的输入。\n",
"想要通过矩阵相乘来实现它,我们只需要将权重矩阵`W`的形状转置为$(9, 4)$。\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "f1a55ff1",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:05:24.616575Z",
"iopub.status.busy": "2023-08-18T07:05:24.615826Z",
"iopub.status.idle": "2023-08-18T07:05:24.623063Z",
"shell.execute_reply": "2023-08-18T07:05:24.622144Z"
},
"origin_pos": 34,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([[True, True, True],\n",
" [True, True, True],\n",
" [True, True, True]])"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"Z = trans_conv(Y, K)\n",
"Z == torch.matmul(W.T, Y.reshape(-1)).reshape(3, 3)"
]
},
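{
"cell_type": "markdown",
"id": "2f6d0a83",
"metadata": {},
"source": [
"As a final consistency check (a minimal sketch reusing `Y`, `K`, and `Z` from above), the high-level API produces the same result: setting the kernel of `nn.ConvTranspose2d` to `K` reproduces `Z`, so the API, `trans_conv`, and multiplication by $\\mathbf{W}^\\top$ all agree.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c8e3b1f5",
"metadata": {
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"tconv = nn.ConvTranspose2d(1, 1, kernel_size=2, bias=False)\n",
"tconv.weight.data = K.reshape(1, 1, 2, 2)\n",
"# Compare against Z = trans_conv(Y, K) computed above\n",
"torch.allclose(tconv(Y.reshape(1, 1, 2, 2)).detach().reshape(3, 3), Z)"
]
},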
{
"cell_type": "markdown",
"id": "9614cf7b",
"metadata": {
"origin_pos": 36
},
"source": [
"抽象来看,给定输入向量$\\mathbf{x}$和权重矩阵$\\mathbf{W}$,卷积的前向传播函数可以通过将其输入与权重矩阵相乘并输出向量$\\mathbf{y}=\\mathbf{W}\\mathbf{x}$来实现。\n",
"由于反向传播遵循链式法则和$\\nabla_{\\mathbf{x}}\\mathbf{y}=\\mathbf{W}^\\top$,卷积的反向传播函数可以通过将其输入与转置的权重矩阵$\\mathbf{W}^\\top$相乘来实现。\n",
"因此,转置卷积层能够交换卷积层的正向传播函数和反向传播函数:它的正向传播和反向传播函数将输入向量分别与$\\mathbf{W}^\\top$和$\\mathbf{W}$相乘。\n",
"\n",
"## 小结\n",
"\n",
"* 与通过卷积核减少输入元素的常规卷积相反,转置卷积通过卷积核广播输入元素,从而产生形状大于输入的输出。\n",
"* 如果我们将$\\mathsf{X}$输入卷积层$f$来获得输出$\\mathsf{Y}=f(\\mathsf{X})$并创造一个与$f$有相同的超参数、但输出通道数是$\\mathsf{X}$中通道数的转置卷积层$g$,那么$g(Y)$的形状将与$\\mathsf{X}$相同。\n",
"* 我们可以使用矩阵乘法来实现卷积。转置卷积层能够交换卷积层的正向传播函数和反向传播函数。\n",
"\n",
"## 练习\n",
"\n",
"1. 在 :numref:`subsec-connection-to-mat-transposition`中,卷积输入`X`和转置的卷积输出`Z`具有相同的形状。他们的数值也相同吗?为什么?\n",
"1. 使用矩阵乘法来实现卷积是否有效率?为什么?\n"
]
},
{
"cell_type": "markdown",
"id": "bcd86378",
"metadata": {
"origin_pos": 38,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/3302)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}