{ "cells": [ { "cell_type": "markdown", "id": "b0211d46", "metadata": { "origin_pos": 0 }, "source": [ "# 数值稳定性和模型初始化\n", ":label:`sec_numerical_stability`\n", "\n", "到目前为止,我们实现的每个模型都是根据某个预先指定的分布来初始化模型的参数。\n", "有人会认为初始化方案是理所当然的,忽略了如何做出这些选择的细节。甚至有人可能会觉得,初始化方案的选择并不是特别重要。\n", "相反,初始化方案的选择在神经网络学习中起着举足轻重的作用,\n", "它对保持数值稳定性至关重要。\n", "此外,这些初始化方案的选择可以与非线性激活函数的选择有趣的结合在一起。\n", "我们选择哪个函数以及如何初始化参数可以决定优化算法收敛的速度有多快。\n", "糟糕选择可能会导致我们在训练时遇到梯度爆炸或梯度消失。\n", "本节将更详细地探讨这些主题,并讨论一些有用的启发式方法。\n", "这些启发式方法在整个深度学习生涯中都很有用。\n", "\n", "## 梯度消失和梯度爆炸\n", "\n", "考虑一个具有$L$层、输入$\\mathbf{x}$和输出$\\mathbf{o}$的深层网络。\n", "每一层$l$由变换$f_l$定义,\n", "该变换的参数为权重$\\mathbf{W}^{(l)}$,\n", "其隐藏变量是$\\mathbf{h}^{(l)}$(令 $\\mathbf{h}^{(0)} = \\mathbf{x}$)。\n", "我们的网络可以表示为:\n", "\n", "$$\\mathbf{h}^{(l)} = f_l (\\mathbf{h}^{(l-1)}) \\text{ 因此 } \\mathbf{o} = f_L \\circ \\ldots \\circ f_1(\\mathbf{x}).$$\n", "\n", "如果所有隐藏变量和输入都是向量,\n", "我们可以将$\\mathbf{o}$关于任何一组参数$\\mathbf{W}^{(l)}$的梯度写为下式:\n", "\n", "$$\\partial_{\\mathbf{W}^{(l)}} \\mathbf{o} = \\underbrace{\\partial_{\\mathbf{h}^{(L-1)}} \\mathbf{h}^{(L)}}_{ \\mathbf{M}^{(L)} \\stackrel{\\mathrm{def}}{=}} \\cdot \\ldots \\cdot \\underbrace{\\partial_{\\mathbf{h}^{(l)}} \\mathbf{h}^{(l+1)}}_{ \\mathbf{M}^{(l+1)} \\stackrel{\\mathrm{def}}{=}} \\underbrace{\\partial_{\\mathbf{W}^{(l)}} \\mathbf{h}^{(l)}}_{ \\mathbf{v}^{(l)} \\stackrel{\\mathrm{def}}{=}}.$$\n", "\n", "换言之,该梯度是$L-l$个矩阵\n", "$\\mathbf{M}^{(L)} \\cdot \\ldots \\cdot \\mathbf{M}^{(l+1)}$\n", "与梯度向量 $\\mathbf{v}^{(l)}$的乘积。\n", "因此,我们容易受到数值下溢问题的影响.\n", "当将太多的概率乘在一起时,这些问题经常会出现。\n", "在处理概率时,一个常见的技巧是切换到对数空间,\n", "即将数值表示的压力从尾数转移到指数。\n", "不幸的是,上面的问题更为严重:\n", "最初,矩阵 $\\mathbf{M}^{(l)}$ 可能具有各种各样的特征值。\n", "他们可能很小,也可能很大;\n", "他们的乘积可能非常大,也可能非常小。\n", "\n", "不稳定梯度带来的风险不止在于数值表示;\n", "不稳定梯度也威胁到我们优化算法的稳定性。\n", "我们可能面临一些问题。\n", "要么是*梯度爆炸*(gradient exploding)问题:\n", "参数更新过大,破坏了模型的稳定收敛;\n", "要么是*梯度消失*(gradient vanishing)问题:\n", "参数更新过小,在每次更新时几乎不会移动,导致模型无法学习。\n", "\n", "### (**梯度消失**)\n", "\n", "曾经sigmoid函数$1/(1 + \\exp(-x))$( 
:numref:`sec_mlp`) was once popular,\n",
"because it resembles a thresholding function.\n",
"Since early artificial neural networks were inspired by biological neural networks,\n",
"the idea that neurons fire either fully or not at all (just like biological neurons) seemed appealing.\n",
"However, it turns out to be a common cause of the vanishing gradient problem.\n",
"Let us take a closer look at why the sigmoid causes gradients to vanish.\n" ] }, { "cell_type": "code", "execution_count": 1, "id": "4b473c67", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T07:03:31.292874Z", "iopub.status.busy": "2023-08-18T07:03:31.292041Z", "iopub.status.idle": "2023-08-18T07:03:34.470038Z", "shell.execute_reply": "2023-08-18T07:03:34.469058Z" }, "origin_pos": 2, "tab": [ "pytorch" ] }, "outputs": [ { "data": { "text/plain": [ "<Figure size 450x250 with 1 Axes>" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "%matplotlib inline\n",
"import torch\n",
"from d2l import torch as d2l\n",
"\n",
"x = torch.arange(-8.0, 8.0, 0.1, requires_grad=True)\n",
"y = torch.sigmoid(x)\n",
"y.backward(torch.ones_like(x))\n",
"\n",
"d2l.plot(x.detach().numpy(), [y.detach().numpy(), x.grad.numpy()],\n",
"         legend=['sigmoid', 'gradient'], figsize=(4.5, 2.5))" ] }, { "cell_type": "markdown", "id": "121269aa", "metadata": { "origin_pos": 5 }, "source": [ "As the plot shows, the sigmoid's gradient vanishes both when its inputs are large and when they are small.\n",
"Moreover, when backpropagating through many layers, unless we are in the Goldilocks zone where the inputs to the sigmoid are close to zero,\n",
"the gradients of the overall product may vanish.\n",
"When our network has many layers, unless we are careful, the gradient will likely be cut off at some layer.\n",
"Indeed, this problem used to plague deep network training.\n",
"Consequently, the more stable ReLU family of functions has become the default choice for practitioners (though less plausible from a neuroscience perspective).\n",
"\n",
"### [**Exploding Gradients**]\n",
"\n",
"The opposite problem, exploding gradients, can be similarly vexing.\n",
"To illustrate this a bit better, we draw 100 Gaussian random matrices and multiply them with some initial matrix.\n",
"For the scale that we picked (variance $\\sigma^2=1$), the matrix product explodes.\n",
"When this happens due to the initialization of a deep network, we have no chance of getting a gradient descent optimizer to converge.\n" ] }, { "cell_type": "code", "execution_count": 2, "id": "1bcc7a40", "metadata": { "execution": { "iopub.execute_input": "2023-08-18T07:03:34.476936Z", "iopub.status.busy": "2023-08-18T07:03:34.476128Z", "iopub.status.idle": "2023-08-18T07:03:34.491176Z", "shell.execute_reply": "2023-08-18T07:03:34.490214Z" }, "origin_pos": 7, "tab": [ "pytorch" ] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "a single matrix \n",
" tensor([[-0.7872,  2.7090,  0.5996, -1.3191],\n",
"        [-1.8260, -0.7130, -0.5521,  0.1051],\n",
"        [ 1.1213,  1.0472, -0.3991, -0.3802],\n",
"        [ 0.5552,  0.4517, -0.3218,  0.5214]])\n",
"after multiplying 100 matrices\n",
" tensor([[-2.1897e+26,  8.8308e+26,  1.9813e+26,  1.7019e+26],\n",
"        [ 1.3110e+26, -5.2870e+26, -1.1862e+26, -1.0189e+26],\n",
"        [-1.6008e+26,  6.4559e+26,  1.4485e+26,  1.2442e+26],\n",
"        [ 3.0943e+25, -1.2479e+26, -2.7998e+25, -2.4050e+25]])\n" ] } ], "source": [ "M = torch.normal(0, 1, size=(4, 4))\n",
"print('a single matrix \\n', M)\n",
"for i in range(100):\n",
"    M = torch.mm(M, torch.normal(0, 1, size=(4, 4)))\n",
"\n",
"print('乘以100个矩阵后\\n', M)" ] }, { "cell_type": "markdown", "id": "dc114830", "metadata": { "origin_pos": 10 }, "source": [ "### 打破对称性\n", "\n", "神经网络设计中的另一个问题是其参数化所固有的对称性。\n", "假设我们有一个简单的多层感知机,它有一个隐藏层和两个隐藏单元。\n", "在这种情况下,我们可以对第一层的权重$\\mathbf{W}^{(1)}$进行重排列,\n", "并且同样对输出层的权重进行重排列,可以获得相同的函数。\n", "第一个隐藏单元与第二个隐藏单元没有什么特别的区别。\n", "换句话说,我们在每一层的隐藏单元之间具有排列对称性。\n", "\n", "假设输出层将上述两个隐藏单元的多层感知机转换为仅一个输出单元。\n", "想象一下,如果我们将隐藏层的所有参数初始化为$\\mathbf{W}^{(1)} = c$,\n", "$c$为常量,会发生什么?\n", "在这种情况下,在前向传播期间,两个隐藏单元采用相同的输入和参数,\n", "产生相同的激活,该激活被送到输出单元。\n", "在反向传播期间,根据参数$\\mathbf{W}^{(1)}$对输出单元进行微分,\n", "得到一个梯度,其元素都取相同的值。\n", "因此,在基于梯度的迭代(例如,小批量随机梯度下降)之后,\n", "$\\mathbf{W}^{(1)}$的所有元素仍然采用相同的值。\n", "这样的迭代永远不会打破对称性,我们可能永远也无法实现网络的表达能力。\n", "隐藏层的行为就好像只有一个单元。\n", "请注意,虽然小批量随机梯度下降不会打破这种对称性,但暂退法正则化可以。\n", "\n", "## 参数初始化\n", "\n", "解决(或至少减轻)上述问题的一种方法是进行参数初始化,\n", "优化期间的注意和适当的正则化也可以进一步提高稳定性。\n", "\n", "### 默认初始化\n", "\n", "在前面的部分中,例如在 :numref:`sec_linear_concise`中,\n", "我们使用正态分布来初始化权重值。如果我们不指定初始化方法,\n", "框架将使用默认的随机初始化方法,对于中等难度的问题,这种方法通常很有效。\n", "\n", "### Xavier初始化\n", ":label:`subsec_xavier`\n", "\n", "让我们看看某些*没有非线性*的全连接层输出(例如,隐藏变量)$o_{i}$的尺度分布。\n", "对于该层$n_\\mathrm{in}$输入$x_j$及其相关权重$w_{ij}$,输出由下式给出\n", "\n", "$$o_{i} = \\sum_{j=1}^{n_\\mathrm{in}} w_{ij} x_j.$$\n", "\n", "权重$w_{ij}$都是从同一分布中独立抽取的。\n", "此外,让我们假设该分布具有零均值和方差$\\sigma^2$。\n", "请注意,这并不意味着分布必须是高斯的,只是均值和方差需要存在。\n", "现在,让我们假设层$x_j$的输入也具有零均值和方差$\\gamma^2$,\n", "并且它们独立于$w_{ij}$并且彼此独立。\n", "在这种情况下,我们可以按如下方式计算$o_i$的平均值和方差:\n", "\n", "$$\n", "\\begin{aligned}\n", " E[o_i] & = \\sum_{j=1}^{n_\\mathrm{in}} E[w_{ij} x_j] \\\\&= \\sum_{j=1}^{n_\\mathrm{in}} E[w_{ij}] E[x_j] \\\\&= 0, \\\\\n", " \\mathrm{Var}[o_i] & = E[o_i^2] - (E[o_i])^2 \\\\\n", " & = \\sum_{j=1}^{n_\\mathrm{in}} E[w^2_{ij} x^2_j] - 0 \\\\\n", " & = \\sum_{j=1}^{n_\\mathrm{in}} E[w^2_{ij}] E[x^2_j] \\\\\n", " & = n_\\mathrm{in} \\sigma^2 \\gamma^2.\n", "\\end{aligned}\n", "$$\n", "\n", "保持方差不变的一种方法是设置$n_\\mathrm{in} \\sigma^2 = 1$。\n", "现在考虑反向传播过程,我们面临着类似的问题,尽管梯度是从更靠近输出的层传播的。\n", 
"使用与前向传播相同的推断,我们可以看到,除非$n_\\mathrm{out} \\sigma^2 = 1$,\n", "否则梯度的方差可能会增大,其中$n_\\mathrm{out}$是该层的输出的数量。\n", "这使得我们进退两难:我们不可能同时满足这两个条件。\n", "相反,我们只需满足:\n", "\n", "$$\n", "\\begin{aligned}\n", "\\frac{1}{2} (n_\\mathrm{in} + n_\\mathrm{out}) \\sigma^2 = 1 \\text{ 或等价于 }\n", "\\sigma = \\sqrt{\\frac{2}{n_\\mathrm{in} + n_\\mathrm{out}}}.\n", "\\end{aligned}\n", "$$\n", "\n", "这就是现在标准且实用的*Xavier初始化*的基础,\n", "它以其提出者 :cite:`Glorot.Bengio.2010` 第一作者的名字命名。\n", "通常,Xavier初始化从均值为零,方差\n", "$\\sigma^2 = \\frac{2}{n_\\mathrm{in} + n_\\mathrm{out}}$\n", "的高斯分布中采样权重。\n", "我们也可以将其改为选择从均匀分布中抽取权重时的方差。\n", "注意均匀分布$U(-a, a)$的方差为$\\frac{a^2}{3}$。\n", "将$\\frac{a^2}{3}$代入到$\\sigma^2$的条件中,将得到初始化值域:\n", "\n", "$$U\\left(-\\sqrt{\\frac{6}{n_\\mathrm{in} + n_\\mathrm{out}}}, \\sqrt{\\frac{6}{n_\\mathrm{in} + n_\\mathrm{out}}}\\right).$$\n", "\n", "尽管在上述数学推理中,“不存在非线性”的假设在神经网络中很容易被违反,\n", "但Xavier初始化方法在实践中被证明是有效的。\n", "\n", "### 额外阅读\n", "\n", "上面的推理仅仅触及了现代参数初始化方法的皮毛。\n", "深度学习框架通常实现十几种不同的启发式方法。\n", "此外,参数初始化一直是深度学习基础研究的热点领域。\n", "其中包括专门用于参数绑定(共享)、超分辨率、序列模型和其他情况的启发式算法。\n", "例如,Xiao等人演示了通过使用精心设计的初始化方法\n", " :cite:`Xiao.Bahri.Sohl-Dickstein.ea.2018`,\n", "可以无须架构上的技巧而训练10000层神经网络的可能性。\n", "\n", "如果有读者对该主题感兴趣,我们建议深入研究本模块的内容,\n", "阅读提出并分析每种启发式方法的论文,然后探索有关该主题的最新出版物。\n", "也许会偶然发现甚至发明一个聪明的想法,并为深度学习框架提供一个实现。\n", "\n", "## 小结\n", "\n", "* 梯度消失和梯度爆炸是深度网络中常见的问题。在参数初始化时需要非常小心,以确保梯度和参数可以得到很好的控制。\n", "* 需要用启发式的初始化方法来确保初始梯度既不太大也不太小。\n", "* ReLU激活函数缓解了梯度消失问题,这样可以加速收敛。\n", "* 随机初始化是保证在进行优化前打破对称性的关键。\n", "* Xavier初始化表明,对于每一层,输出的方差不受输入数量的影响,任何梯度的方差不受输出数量的影响。\n", "\n", "## 练习\n", "\n", "1. 除了多层感知机的排列对称性之外,还能设计出其他神经网络可能会表现出对称性且需要被打破的情况吗?\n", "2. 我们是否可以将线性回归或softmax回归中的所有权重参数初始化为相同的值?\n", "3. 在相关资料中查找两个矩阵乘积特征值的解析界。这对确保梯度条件合适有什么启示?\n", "4. 
If we know that some terms diverge, can we fix this after the fact? Look at the paper on layerwise adaptive rate scaling :cite:`You.Gitman.Ginsburg.2017` for inspiration.\n" ] }, { "cell_type": "markdown", "id": "bf1eb65c", "metadata": { "origin_pos": 12, "tab": [ "pytorch" ] }, "source": [ "[Discussions](https://discuss.d2l.ai/t/1818)\n" ] } ], "metadata": { "language_info": { "name": "python" }, "required_libs": [] }, "nbformat": 4, "nbformat_minor": 5 }