{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "b05be39e",
   "metadata": {
    "origin_pos": 0
   },
   "source": [
    "# Parameter Management\n",
    "\n",
    "Once we have chosen an architecture and set our hyperparameters, we move on to the training loop,\n",
    "where our goal is to find the parameter values that minimize the loss function.\n",
    "After training, we will need these parameters in order to make future predictions.\n",
    "Additionally, we sometimes wish to extract the parameters so as to reuse them in some other context,\n",
    "to save the model to disk so that it can be executed in other software,\n",
    "or to examine them in the hope of gaining scientific understanding.\n",
    "\n",
    "So far we have relied on the deep learning framework to do the heavy lifting of training,\n",
    "ignoring the details of how parameters are manipulated.\n",
    "In this section, we cover the following:\n",
    "\n",
    "* Accessing parameters for debugging, diagnostics, and visualization;\n",
    "* Parameter initialization;\n",
    "* Sharing parameters across different model components.\n",
    "\n",
    "(**We start by focusing on a multilayer perceptron with one hidden layer.**)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "ab7ef7a0",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T07:01:09.649068Z",
     "iopub.status.busy": "2023-08-18T07:01:09.648305Z",
     "iopub.status.idle": "2023-08-18T07:01:10.928992Z",
     "shell.execute_reply": "2023-08-18T07:01:10.927959Z"
    },
    "origin_pos": 2,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[-0.0970],\n",
       "        [-0.0827]], grad_fn=<AddmmBackward0>)"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))\n",
    "X = torch.rand(size=(2, 4))\n",
    "net(X)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fa004a12",
   "metadata": {
    "origin_pos": 5
   },
   "source": [
    "## [**Parameter Access**]\n",
    "\n",
    "We start by accessing the parameters of a model we already have.\n",
    "When a model is defined via the `Sequential` class,\n",
    "we can access any of its layers by index, as if the model were a list;\n",
    "each layer's parameters live in its attributes.\n",
    "As shown below, we can inspect the parameters of the second fully connected layer.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "5e2fff9a",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T07:01:10.933865Z",
     "iopub.status.busy": "2023-08-18T07:01:10.933267Z",
     "iopub.status.idle": "2023-08-18T07:01:10.939922Z",
     "shell.execute_reply": "2023-08-18T07:01:10.938931Z"
    },
    "origin_pos": 7,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "OrderedDict([('weight', tensor([[-0.0427, -0.2939, -0.1894,  0.0220, -0.1709, -0.1522, -0.0334, -0.2263]])), ('bias', tensor([0.0887]))])\n"
     ]
    }
   ],
   "source": [
    "print(net[2].state_dict())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b77c779c",
   "metadata": {
    "origin_pos": 9
   },
   "source": [
    "The output tells us a few important things.\n",
    "First, this fully connected layer contains two parameters: the layer's weights and its bias.\n",
    "Both are stored as single-precision floats (float32).\n",
    "Note that the parameter names allow us to uniquely identify each parameter,\n",
    "even in a network containing hundreds of layers.\n",
    "\n",
    "### [**Targeted Parameters**]\n",
    "\n",
    "Note that each parameter is represented as an instance of the parameter class.\n",
    "To do anything useful with a parameter, we first need to access its underlying numerical values.\n",
    "There are several ways to do this; some are simpler, while others are more general.\n",
    "The following code extracts the bias from the second fully connected layer\n",
    "(i.e., the third layer of the network), which returns a parameter class instance,\n",
    "and then accesses that parameter's value.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "d0682fff",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T07:01:10.945104Z",
     "iopub.status.busy": "2023-08-18T07:01:10.944250Z",
     "iopub.status.idle": "2023-08-18T07:01:10.951764Z",
     "shell.execute_reply": "2023-08-18T07:01:10.950790Z"
    },
    "origin_pos": 11,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<class 'torch.nn.parameter.Parameter'>\n",
      "Parameter containing:\n",
      "tensor([0.0887], requires_grad=True)\n",
      "tensor([0.0887])\n"
     ]
    }
   ],
   "source": [
    "print(type(net[2].bias))\n",
    "print(net[2].bias)\n",
    "print(net[2].bias.data)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b90565b1",
   "metadata": {
    "origin_pos": 14,
    "tab": [
     "pytorch"
    ]
   },
   "source": [
    "Parameters are complex objects that contain values, gradients, and additional information.\n",
    "That is why we need to request the value explicitly.\n",
    "Besides the value, we can also access each parameter's gradient.\n",
    "Since we have not invoked backpropagation for this network yet,\n",
    "the parameters' gradients are in their initial state.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "3cf4d55b",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T07:01:10.956378Z",
     "iopub.status.busy": "2023-08-18T07:01:10.955542Z",
     "iopub.status.idle": "2023-08-18T07:01:10.961810Z",
     "shell.execute_reply": "2023-08-18T07:01:10.960767Z"
    },
    "origin_pos": 16,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "True"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "net[2].weight.grad is None"
   ]
  },
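  {
   "cell_type": "markdown",
   "id": "grad-after-backward-sketch",
   "metadata": {},
   "source": [
    "To make the gradient concrete, here is a minimal sketch (an illustrative addition with an arbitrarily chosen squared-error loss, not part of the original text): after one backward pass, `net[2].weight.grad` holds a tensor of the same shape as the weight instead of `None`.\n",
    "\n",
    "```python\n",
    "# Populate the gradients with one backward pass; the loss is arbitrary.\n",
    "loss = (net(X) - torch.ones(2, 1)).pow(2).sum()\n",
    "loss.backward()\n",
    "print(net[2].weight.grad is None)  # False: the gradient now exists\n",
    "print(net[2].weight.grad.shape)    # torch.Size([1, 8]), same as the weight\n",
    "```\n"
   ]
  },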
  {
   "cell_type": "markdown",
   "id": "01e647c1",
   "metadata": {
    "origin_pos": 17
   },
   "source": [
    "### [**All Parameters at Once**]\n",
    "\n",
    "When we need to perform operations on all parameters, accessing them one by one can grow tedious.\n",
    "The situation gets especially unwieldy when we work with more complex blocks (e.g., nested blocks),\n",
    "since we would need to recurse through the entire tree to extract each sub-block's parameters.\n",
    "Below we demonstrate this by comparing access to the parameters of the first fully connected layer\n",
    "with access to all layers.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "916939ce",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T07:01:10.966725Z",
     "iopub.status.busy": "2023-08-18T07:01:10.965969Z",
     "iopub.status.idle": "2023-08-18T07:01:10.972600Z",
     "shell.execute_reply": "2023-08-18T07:01:10.971655Z"
    },
    "origin_pos": 19,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "('weight', torch.Size([8, 4])) ('bias', torch.Size([8]))\n",
      "('0.weight', torch.Size([8, 4])) ('0.bias', torch.Size([8])) ('2.weight', torch.Size([1, 8])) ('2.bias', torch.Size([1]))\n"
     ]
    }
   ],
   "source": [
    "print(*[(name, param.shape) for name, param in net[0].named_parameters()])\n",
    "print(*[(name, param.shape) for name, param in net.named_parameters()])"
   ]
  },
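  {
   "cell_type": "markdown",
   "id": "param-count-sketch",
   "metadata": {},
   "source": [
    "One common use of iterating over all parameters at once is counting them. The following sketch (an illustrative addition) sums the trainable scalars in `net`: $8 \\times 4 + 8 = 40$ for the first layer plus $1 \\times 8 + 1 = 9$ for the second, i.e., 49 in total.\n",
    "\n",
    "```python\n",
    "# Total number of trainable scalar parameters in net (expected: 49).\n",
    "num_params = sum(p.numel() for p in net.parameters() if p.requires_grad)\n",
    "print(num_params)\n",
    "```\n"
   ]
  },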
  {
   "cell_type": "markdown",
   "id": "c9cc1e2f",
   "metadata": {
    "origin_pos": 21
   },
   "source": [
    "This provides us with another way of accessing the network's parameters, as shown below.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "116207ef",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T07:01:10.977269Z",
     "iopub.status.busy": "2023-08-18T07:01:10.976623Z",
     "iopub.status.idle": "2023-08-18T07:01:10.983222Z",
     "shell.execute_reply": "2023-08-18T07:01:10.982309Z"
    },
    "origin_pos": 23,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([0.0887])"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "net.state_dict()['2.bias'].data"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f2ae2721",
   "metadata": {
    "origin_pos": 26
   },
   "source": [
    "### [**Collecting Parameters from Nested Blocks**]\n",
    "\n",
    "Let's see how the parameter naming conventions work when we nest multiple blocks inside each other.\n",
    "We first define a function that produces blocks (a block factory, so to speak)\n",
    "and then combine these blocks into still larger blocks.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "712e31fd",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T07:01:10.988088Z",
     "iopub.status.busy": "2023-08-18T07:01:10.987352Z",
     "iopub.status.idle": "2023-08-18T07:01:10.998245Z",
     "shell.execute_reply": "2023-08-18T07:01:10.997197Z"
    },
    "origin_pos": 28,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([[0.2596],\n",
       "        [0.2596]], grad_fn=<AddmmBackward0>)"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def block1():\n",
    "    return nn.Sequential(nn.Linear(4, 8), nn.ReLU(),\n",
    "                         nn.Linear(8, 4), nn.ReLU())\n",
    "\n",
    "def block2():\n",
    "    net = nn.Sequential()\n",
    "    for i in range(4):\n",
    "        # Nest block1 in here\n",
    "        net.add_module(f'block {i}', block1())\n",
    "    return net\n",
    "\n",
    "rgnet = nn.Sequential(block2(), nn.Linear(4, 1))\n",
    "rgnet(X)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ac9958fb",
   "metadata": {
    "origin_pos": 31
   },
   "source": [
    "[**Now that we have designed the network, let us see how it is organized.**]\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "c7d7717d",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T07:01:11.002889Z",
     "iopub.status.busy": "2023-08-18T07:01:11.002264Z",
     "iopub.status.idle": "2023-08-18T07:01:11.007643Z",
     "shell.execute_reply": "2023-08-18T07:01:11.006464Z"
    },
    "origin_pos": 33,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Sequential(\n",
      "  (0): Sequential(\n",
      "    (block 0): Sequential(\n",
      "      (0): Linear(in_features=4, out_features=8, bias=True)\n",
      "      (1): ReLU()\n",
      "      (2): Linear(in_features=8, out_features=4, bias=True)\n",
      "      (3): ReLU()\n",
      "    )\n",
      "    (block 1): Sequential(\n",
      "      (0): Linear(in_features=4, out_features=8, bias=True)\n",
      "      (1): ReLU()\n",
      "      (2): Linear(in_features=8, out_features=4, bias=True)\n",
      "      (3): ReLU()\n",
      "    )\n",
      "    (block 2): Sequential(\n",
      "      (0): Linear(in_features=4, out_features=8, bias=True)\n",
      "      (1): ReLU()\n",
      "      (2): Linear(in_features=8, out_features=4, bias=True)\n",
      "      (3): ReLU()\n",
      "    )\n",
      "    (block 3): Sequential(\n",
      "      (0): Linear(in_features=4, out_features=8, bias=True)\n",
      "      (1): ReLU()\n",
      "      (2): Linear(in_features=8, out_features=4, bias=True)\n",
      "      (3): ReLU()\n",
      "    )\n",
      "  )\n",
      "  (1): Linear(in_features=4, out_features=1, bias=True)\n",
      ")\n"
     ]
    }
   ],
   "source": [
    "print(rgnet)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1c49f699",
   "metadata": {
    "origin_pos": 35
   },
   "source": [
    "Since the layers are hierarchically nested, we can also access them\n",
    "as though indexing through nested lists.\n",
    "Below, we access the bias of the first layer of the second sub-block inside the first major block.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "939ba4d3",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T07:01:11.012522Z",
     "iopub.status.busy": "2023-08-18T07:01:11.011839Z",
     "iopub.status.idle": "2023-08-18T07:01:11.018508Z",
     "shell.execute_reply": "2023-08-18T07:01:11.017590Z"
    },
    "origin_pos": 37,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([ 0.1999, -0.4073, -0.1200, -0.2033, -0.1573,  0.3546, -0.2141, -0.2483])"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "rgnet[0][1][0].bias.data"
   ]
  },
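  {
   "cell_type": "markdown",
   "id": "nested-names-sketch",
   "metadata": {},
   "source": [
    "The qualified parameter names give us the same navigation without numeric indexing. This sketch (an illustrative addition) lists every parameter of `rgnet`; note how each name, e.g. `0.block 1.0.bias`, encodes the path through the nested blocks down to the tensor we just accessed by index.\n",
    "\n",
    "```python\n",
    "# Every parameter name encodes its path through the nested blocks.\n",
    "for name, param in rgnet.named_parameters():\n",
    "    print(name, param.shape)\n",
    "```\n"
   ]
  },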
  {
   "cell_type": "markdown",
   "id": "0383b6a9",
   "metadata": {
    "origin_pos": 40
   },
   "source": [
    "## Parameter Initialization\n",
    "\n",
    "Now that we know how to access the parameters, let's look at how to initialize them properly.\n",
    "We discussed the need for good initialization in :numref:`sec_numerical_stability`.\n",
    "The deep learning framework provides default random initialization,\n",
    "but it also lets us create custom initializers\n",
    "when we want to initialize the weights according to other rules.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0418f044",
   "metadata": {
    "origin_pos": 42,
    "tab": [
     "pytorch"
    ]
   },
   "source": [
    "By default, PyTorch initializes weight and bias matrices uniformly\n",
    "from a range that is computed from the input and output dimensions.\n",
    "PyTorch's `nn.init` module provides a variety of preset initialization methods.\n"
   ]
  },
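  {
   "cell_type": "markdown",
   "id": "default-init-range-sketch",
   "metadata": {},
   "source": [
    "As a quick check of that claim, the sketch below (an illustrative addition that assumes the current `nn.Linear` default, which draws weights from $U(-1/\\sqrt{\\text{fan\\_in}}, 1/\\sqrt{\\text{fan\\_in}})$) instantiates a fresh layer and verifies that all weights fall inside that range.\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "fresh = nn.Linear(4, 8)\n",
    "bound = 1 / math.sqrt(fresh.weight.shape[1])   # fan_in = 4, so bound = 0.5\n",
    "print(fresh.weight.data.abs().max() <= bound)  # expected: tensor(True)\n",
    "```\n"
   ]
  },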
  {
   "cell_type": "markdown",
   "id": "0b0b932a",
   "metadata": {
    "origin_pos": 45
   },
   "source": [
    "### [**Built-in Initialization**]\n",
    "\n",
    "Let's begin by calling on built-in initializers.\n",
    "The code below initializes all weight parameters\n",
    "as Gaussian random variables with standard deviation 0.01,\n",
    "and sets all bias parameters to 0.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "2f00d5e7",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T07:01:11.023955Z",
     "iopub.status.busy": "2023-08-18T07:01:11.023046Z",
     "iopub.status.idle": "2023-08-18T07:01:11.033287Z",
     "shell.execute_reply": "2023-08-18T07:01:11.032096Z"
    },
    "origin_pos": 47,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([-0.0214, -0.0015, -0.0100, -0.0058]), tensor(0.))"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def init_normal(m):\n",
    "    if type(m) == nn.Linear:\n",
    "        nn.init.normal_(m.weight, mean=0, std=0.01)\n",
    "        nn.init.zeros_(m.bias)\n",
    "net.apply(init_normal)\n",
    "net[0].weight.data[0], net[0].bias.data[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "753e540b",
   "metadata": {
    "origin_pos": 50
   },
   "source": [
    "We can also initialize all parameters to a given constant value, say, 1.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "49ee306c",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T07:01:11.038321Z",
     "iopub.status.busy": "2023-08-18T07:01:11.037607Z",
     "iopub.status.idle": "2023-08-18T07:01:11.049009Z",
     "shell.execute_reply": "2023-08-18T07:01:11.047793Z"
    },
    "origin_pos": 52,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(tensor([1., 1., 1., 1.]), tensor(0.))"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def init_constant(m):\n",
    "    if type(m) == nn.Linear:\n",
    "        nn.init.constant_(m.weight, 1)\n",
    "        nn.init.zeros_(m.bias)\n",
    "net.apply(init_constant)\n",
    "net[0].weight.data[0], net[0].bias.data[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e086279d",
   "metadata": {
    "origin_pos": 55
   },
   "source": [
    "We can also [**apply different initializers to certain blocks**].\n",
    "For example, below we initialize the first layer with the Xavier initializer\n",
    "and initialize the third layer to the constant value 42.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "1a90ffaa",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T07:01:11.054335Z",
     "iopub.status.busy": "2023-08-18T07:01:11.053550Z",
     "iopub.status.idle": "2023-08-18T07:01:11.063215Z",
     "shell.execute_reply": "2023-08-18T07:01:11.062244Z"
    },
    "origin_pos": 57,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([ 0.5236,  0.0516, -0.3236,  0.3794])\n",
      "tensor([[42., 42., 42., 42., 42., 42., 42., 42.]])\n"
     ]
    }
   ],
   "source": [
    "def init_xavier(m):\n",
    "    if type(m) == nn.Linear:\n",
    "        nn.init.xavier_uniform_(m.weight)\n",
    "\n",
    "def init_42(m):\n",
    "    if type(m) == nn.Linear:\n",
    "        nn.init.constant_(m.weight, 42)\n",
    "\n",
    "net[0].apply(init_xavier)\n",
    "net[2].apply(init_42)\n",
    "print(net[0].weight.data[0])\n",
    "print(net[2].weight.data)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "581dcade",
   "metadata": {
    "origin_pos": 60
   },
   "source": [
    "### [**Custom Initialization**]\n",
    "\n",
    "Sometimes the deep learning framework does not provide the initializer we need.\n",
    "In the example below, we define an initializer for any weight parameter $w$ using the following distribution:\n",
    "\n",
    "$$\n",
    "\\begin{aligned}\n",
    "    w \\sim \\begin{cases}\n",
    "        U(5, 10) & \\text{ with probability } \\frac{1}{4} \\\\\n",
    "        0 & \\text{ with probability } \\frac{1}{2} \\\\\n",
    "        U(-10, -5) & \\text{ with probability } \\frac{1}{4}\n",
    "    \\end{cases}\n",
    "\\end{aligned}\n",
    "$$\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "12502b7c",
   "metadata": {
    "origin_pos": 62,
    "tab": [
     "pytorch"
    ]
   },
   "source": [
    "Again, we implement a `my_init` function and apply it to `net`.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "9166f6e3",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T07:01:11.068164Z",
     "iopub.status.busy": "2023-08-18T07:01:11.067460Z",
     "iopub.status.idle": "2023-08-18T07:01:11.079228Z",
     "shell.execute_reply": "2023-08-18T07:01:11.078069Z"
    },
    "origin_pos": 66,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Init weight torch.Size([8, 4])\n",
      "Init weight torch.Size([1, 8])\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "tensor([[ 5.4079,  9.3334,  5.0616,  8.3095],\n",
       "        [ 0.0000,  7.2788, -0.0000, -0.0000]], grad_fn=<SliceBackward0>)"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "def my_init(m):\n",
    "    if type(m) == nn.Linear:\n",
    "        print(\"Init\", *[(name, param.shape)\n",
    "                        for name, param in m.named_parameters()][0])\n",
    "        nn.init.uniform_(m.weight, -10, 10)\n",
    "        m.weight.data *= m.weight.data.abs() >= 5\n",
    "\n",
    "net.apply(my_init)\n",
    "net[0].weight[:2]"
   ]
  },
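  {
   "cell_type": "markdown",
   "id": "my-init-fraction-sketch",
   "metadata": {},
   "source": [
    "As a sanity check that `my_init` realizes the distribution above, this sketch (an illustrative addition) measures the fraction of zeros among the first layer's weights; with only 32 entries the estimate is noisy, but it should land in the vicinity of 1/2.\n",
    "\n",
    "```python\n",
    "# Fraction of weights zeroed out by my_init; should be near 1/2.\n",
    "w = net[0].weight.data\n",
    "print((w == 0).float().mean())\n",
    "```\n"
   ]
  },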
  {
   "cell_type": "markdown",
   "id": "030a52c5",
   "metadata": {
    "origin_pos": 69
   },
   "source": [
    "Note that we always have the option of setting parameters directly.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "5b9af1f8",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T07:01:11.084158Z",
     "iopub.status.busy": "2023-08-18T07:01:11.083416Z",
     "iopub.status.idle": "2023-08-18T07:01:11.092672Z",
     "shell.execute_reply": "2023-08-18T07:01:11.091537Z"
    },
    "origin_pos": 71,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "tensor([42.0000, 10.3334,  6.0616,  9.3095])"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "net[0].weight.data[:] += 1\n",
    "net[0].weight.data[0, 0] = 42\n",
    "net[0].weight.data[0]"
   ]
  },
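  {
   "cell_type": "markdown",
   "id": "no-grad-assignment-sketch",
   "metadata": {},
   "source": [
    "Writing through `.data` sidesteps autograd. An equivalent, arguably more idiomatic sketch (an illustrative addition) performs the same in-place updates inside a `torch.no_grad()` block:\n",
    "\n",
    "```python\n",
    "# The same direct updates, performed under no_grad instead of via .data.\n",
    "with torch.no_grad():\n",
    "    net[0].weight[:] += 1\n",
    "    net[0].weight[0, 0] = 42\n",
    "print(net[0].weight.data[0])\n",
    "```\n"
   ]
  },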
  {
   "cell_type": "markdown",
   "id": "a4144ff7",
   "metadata": {
    "origin_pos": 75
   },
   "source": [
    "## [**Tied Parameters**]\n",
    "\n",
    "Sometimes we want to share parameters across multiple layers:\n",
    "we can define a dense layer and then use its parameters to set those of another layer.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "69660fa7",
   "metadata": {
    "execution": {
     "iopub.execute_input": "2023-08-18T07:01:11.097767Z",
     "iopub.status.busy": "2023-08-18T07:01:11.096948Z",
     "iopub.status.idle": "2023-08-18T07:01:11.108904Z",
     "shell.execute_reply": "2023-08-18T07:01:11.107763Z"
    },
    "origin_pos": 77,
    "tab": [
     "pytorch"
    ]
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([True, True, True, True, True, True, True, True])\n",
      "tensor([True, True, True, True, True, True, True, True])\n"
     ]
    }
   ],
   "source": [
    "# We need to give the shared layer a name so that we can refer to its parameters\n",
    "shared = nn.Linear(8, 8)\n",
    "net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(),\n",
    "                    shared, nn.ReLU(),\n",
    "                    shared, nn.ReLU(),\n",
    "                    nn.Linear(8, 1))\n",
    "net(X)\n",
    "# Check whether the parameters are the same\n",
    "print(net[2].weight.data[0] == net[4].weight.data[0])\n",
    "net[2].weight.data[0, 0] = 100\n",
    "# Make sure that they are actually the same object rather than just equal in value\n",
    "print(net[2].weight.data[0] == net[4].weight.data[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "81dc2c3c",
   "metadata": {
    "origin_pos": 81,
    "tab": [
     "pytorch"
    ]
   },
   "source": [
    "This example shows that the parameters of the third and fifth layers are tied.\n",
    "They are not merely equal in value; they are represented by the same underlying tensor.\n",
    "Thus, if we change one of the parameters, the other one changes as well.\n",
    "You might wonder: when the parameters are tied, what happens to the gradients?\n",
    "Since the model parameters contain gradients, the gradients of the second hidden layer\n",
    "(the third layer of the network) and the third hidden layer (the fifth layer)\n",
    "are added together during backpropagation.\n"
   ]
  },
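  {
   "cell_type": "markdown",
   "id": "tied-grad-sketch",
   "metadata": {},
   "source": [
    "To see this accumulation directly, here is a minimal sketch (an illustrative addition with an arbitrary loss): because `net[2]` and `net[4]` are the same module, their weights share a single `.grad` tensor, which receives the sum of both layers' gradient contributions.\n",
    "\n",
    "```python\n",
    "# One backward pass; the shared weight accumulates both layers' gradients.\n",
    "net.zero_grad()\n",
    "loss = net(X).sum()\n",
    "loss.backward()\n",
    "print(net[2].weight.grad is net[4].weight.grad)  # True: the same tensor\n",
    "```\n"
   ]
  },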
  {
   "cell_type": "markdown",
   "id": "ef8e6259",
   "metadata": {
    "origin_pos": 82
   },
   "source": [
    "## Summary\n",
    "\n",
    "* We have several ways of accessing, initializing, and tying model parameters.\n",
    "* We can use custom initialization methods.\n",
    "\n",
    "## Exercises\n",
    "\n",
    "1. Use the `FancyMLP` model defined in :numref:`sec_model_construction` and access the parameters of its various layers.\n",
    "1. Look at the initialization module documentation to explore the different initializers.\n",
    "1. Construct a multilayer perceptron containing a shared-parameter layer and train it. During training, observe each layer's parameters and gradients.\n",
    "1. Why is sharing parameters a good idea?\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ead65cf9",
   "metadata": {
    "origin_pos": 84,
    "tab": [
     "pytorch"
    ]
   },
   "source": [
    "[Discussions](https://discuss.d2l.ai/t/1829)\n"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  },
  "required_libs": []
 },
 "nbformat": 4,
 "nbformat_minor": 5
}