{
"cells": [
{
"cell_type": "markdown",
"id": "3e211967",
"metadata": {
"origin_pos": 0
},
"source": [
"# Concise Implementation of Linear Regression\n",
":label:`sec_linear_concise`\n",
"\n",
"Over the past few years, driven by a strong interest in deep learning,\n",
"many companies, academics, and hobbyists have developed a variety of mature open-source frameworks.\n",
"These frameworks can automate the repetitive work in gradient-based learning algorithms.\n",
"In :numref:`sec_linear_scratch`, we relied on only:\n",
"(1) tensors for data storage and linear algebra;\n",
"(2) automatic differentiation for calculating gradients.\n",
"In practice, because data iterators, loss functions, optimizers, and neural network layers are so common,\n",
"modern deep learning libraries implement these components for us as well.\n",
"\n",
"In this section, we will show you how to (**implement the linear regression model**) from\n",
" :numref:`sec_linear_scratch` (**concisely by using a deep learning framework**).\n",
"\n",
"## Generating the Dataset\n",
"\n",
"As in :numref:`sec_linear_scratch`, we first [**generate the dataset**].\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "5c88734d",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:52.522009Z",
"iopub.status.busy": "2023-08-18T07:01:52.521295Z",
"iopub.status.idle": "2023-08-18T07:01:54.610713Z",
"shell.execute_reply": "2023-08-18T07:01:54.609677Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import numpy as np\n",
"import torch\n",
"from torch.utils import data\n",
"from d2l import torch as d2l"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "c26b741f",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:54.616404Z",
"iopub.status.busy": "2023-08-18T07:01:54.615685Z",
"iopub.status.idle": "2023-08-18T07:01:54.643472Z",
"shell.execute_reply": "2023-08-18T07:01:54.642512Z"
},
"origin_pos": 5,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"true_w = torch.tensor([2, -3.4])\n",
"true_b = 4.2\n",
"features, labels = d2l.synthetic_data(true_w, true_b, 1000)"
]
},
{
"cell_type": "markdown",
"id": "e6fd8db7",
"metadata": {
"origin_pos": 6
},
"source": [
"## Reading the Dataset\n",
"\n",
"We can [**call an existing API in a framework to read data**].\n",
"We pass `features` and `labels` as arguments to the API and specify `batch_size` through the data iterator.\n",
"In addition, the boolean value `is_train` indicates whether we want the data iterator object to shuffle the data within each epoch.\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "955f5cc0",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:54.648232Z",
"iopub.status.busy": "2023-08-18T07:01:54.647744Z",
"iopub.status.idle": "2023-08-18T07:01:54.653335Z",
"shell.execute_reply": "2023-08-18T07:01:54.652317Z"
},
"origin_pos": 8,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"def load_array(data_arrays, batch_size, is_train=True): #@save\n",
"    \"\"\"Construct a PyTorch data iterator.\"\"\"\n",
"    dataset = data.TensorDataset(*data_arrays)\n",
"    return data.DataLoader(dataset, batch_size, shuffle=is_train)"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "c041eafa",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:54.657592Z",
"iopub.status.busy": "2023-08-18T07:01:54.656999Z",
"iopub.status.idle": "2023-08-18T07:01:54.661787Z",
"shell.execute_reply": "2023-08-18T07:01:54.660785Z"
},
"origin_pos": 11,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"batch_size = 10\n",
"data_iter = load_array((features, labels), batch_size)"
]
},
{
"cell_type": "markdown",
"id": "503e6815",
"metadata": {
"origin_pos": 12
},
"source": [
"We use `data_iter` in much the same way as we used the `data_iter` function in :numref:`sec_linear_scratch`. To verify that it is working, let us read and print the first minibatch of examples.\n",
"Unlike in :numref:`sec_linear_scratch`, here we use `iter` to construct a Python iterator and use `next` to obtain the first item from the iterator.\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "7c6919b8",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:54.665574Z",
"iopub.status.busy": "2023-08-18T07:01:54.664999Z",
"iopub.status.idle": "2023-08-18T07:01:54.673523Z",
"shell.execute_reply": "2023-08-18T07:01:54.672688Z"
},
"origin_pos": 13,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"[tensor([[-1.3116, -0.3062],\n",
" [-1.5653, 0.4830],\n",
" [-0.8893, -0.9466],\n",
" [-1.2417, 1.6891],\n",
" [-0.7148, 0.1376],\n",
" [-0.2162, -0.6122],\n",
" [ 2.4048, -0.3211],\n",
" [-0.1516, 0.4997],\n",
" [ 1.5298, -0.2291],\n",
" [ 1.3895, 1.2602]]),\n",
" tensor([[ 2.6073],\n",
" [-0.5787],\n",
" [ 5.6339],\n",
" [-4.0211],\n",
" [ 2.3117],\n",
" [ 5.8492],\n",
" [10.0926],\n",
" [ 2.1932],\n",
" [ 8.0441],\n",
" [ 2.6943]])]"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"next(iter(data_iter))"
]
},
{
"cell_type": "markdown",
"id": "4f57af75",
"metadata": {
"origin_pos": 14
},
"source": [
"## Defining the Model\n",
"\n",
"When we implemented linear regression from scratch in :numref:`sec_linear_scratch`,\n",
"we defined our model parameters explicitly and wrote out the calculations that produce output using basic linear algebra operations.\n",
"But once models become more complex, and once we have to implement models nearly every day, it is natural to want to simplify the process.\n",
"This situation is similar to coding up your own blog from scratch.\n",
"Doing it once or twice is instructive, but it would hardly be efficient if every new blog required an engineer to spend a month reinventing the wheel.\n",
"\n",
"For standard deep learning models, we can [**use a framework's predefined layers**]. This allows us to focus on which layers to use to construct the model, rather than on the details of the layers' implementation.\n",
"We first define a model variable `net`, an instance of the `Sequential` class.\n",
"The `Sequential` class chains multiple layers together.\n",
"When given input data, a `Sequential` instance passes the data into the first layer,\n",
"then uses the first layer's output as the second layer's input, and so on.\n",
"In the example below, our model contains only one layer, so we do not really need `Sequential`.\n",
"But since nearly all of our future models will involve multiple layers, using `Sequential` here will familiarize you with the “standard pipeline”.\n",
"\n",
"Recall the single-layer network architecture in :numref:`fig_single_neuron`.\n",
"The layer is called a *fully-connected layer*,\n",
"because each of its inputs is connected to each of its outputs by means of a matrix-vector multiplication.\n"
]
},
{
"cell_type": "markdown",
"id": "2b7cb683",
"metadata": {
"origin_pos": 16,
"tab": [
"pytorch"
]
},
"source": [
"In PyTorch, the fully-connected layer is defined in the `Linear` class.\n",
"Note that we pass two arguments into `nn.Linear`.\n",
"The first specifies the input feature dimension, which is 2; the second specifies the output feature dimension, which is a single scalar and therefore 1.\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "85c54a1a",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:54.677177Z",
"iopub.status.busy": "2023-08-18T07:01:54.676580Z",
"iopub.status.idle": "2023-08-18T07:01:54.680914Z",
"shell.execute_reply": "2023-08-18T07:01:54.680130Z"
},
"origin_pos": 20,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"# nn is an abbreviation for neural networks\n",
"from torch import nn\n",
"\n",
"net = nn.Sequential(nn.Linear(2, 1))"
]
},
{
"cell_type": "markdown",
"id": "fc18b2c1",
"metadata": {
"origin_pos": 23
},
"source": [
"## (**Initializing Model Parameters**)\n",
"\n",
"Before using `net`, we need to initialize the model parameters,\n",
"such as the weights and bias in the linear regression model.\n",
"Deep learning frameworks typically have predefined methods for initializing parameters.\n",
"Here we specify that each weight parameter should be randomly sampled from a normal distribution with mean 0 and standard deviation 0.01,\n",
"and that the bias parameter will be initialized to zero.\n"
]
},
{
"cell_type": "markdown",
"id": "f7452e3b",
"metadata": {
"origin_pos": 25,
"tab": [
"pytorch"
]
},
"source": [
"Just as we specified the input and output dimensions when constructing `nn.Linear`,\n",
"we can now access the parameters directly to set their initial values.\n",
"We select the first layer in the network via `net[0]`,\n",
"and then access the parameters via the `weight.data` and `bias.data` attributes.\n",
"We can then overwrite the parameter values with the in-place methods `normal_` and `fill_`.\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "31716c55",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:54.684561Z",
"iopub.status.busy": "2023-08-18T07:01:54.684036Z",
"iopub.status.idle": "2023-08-18T07:01:54.690673Z",
"shell.execute_reply": "2023-08-18T07:01:54.689754Z"
},
"origin_pos": 29,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"tensor([0.])"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"net[0].weight.data.normal_(0, 0.01)\n",
"net[0].bias.data.fill_(0)"
]
},
{
"cell_type": "markdown",
"id": "94568f78",
"metadata": {
"origin_pos": 33,
"tab": [
"pytorch"
]
},
"source": [
"\n"
]
},
{
"cell_type": "markdown",
"id": "e9592f9a",
"metadata": {
"origin_pos": 35
},
"source": [
"## Defining the Loss Function\n"
]
},
{
"cell_type": "markdown",
"id": "9a431ee3",
"metadata": {
"origin_pos": 37,
"tab": [
"pytorch"
]
},
"source": [
"[**The `MSELoss` class computes the mean squared error, also known as the squared $L_2$ norm**].\n",
"By default, it returns the average loss over all examples.\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "19a417ac",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:54.695575Z",
"iopub.status.busy": "2023-08-18T07:01:54.694922Z",
"iopub.status.idle": "2023-08-18T07:01:54.699373Z",
"shell.execute_reply": "2023-08-18T07:01:54.698348Z"
},
"origin_pos": 41,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"loss = nn.MSELoss()"
]
},
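{
"cell_type": "markdown",
"id": "mse-reduction-note",
"metadata": {},
"source": [
"As a side note, the averaging behavior above is controlled by `MSELoss`'s `reduction` argument (`'mean'` by default; `'sum'` and `'none'` are the other standard options). A minimal sketch, using toy tensors made up purely for illustration:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "mse-reduction-demo",
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"from torch import nn\n",
"\n",
"# Toy predictions and targets, purely for illustration\n",
"y_hat = torch.tensor([1.0, 2.0, 3.0])\n",
"y = torch.tensor([1.0, 2.0, 5.0])\n",
"# Default reduction averages the per-example squared errors: (0 + 0 + 4) / 3\n",
"mean_loss = nn.MSELoss()(y_hat, y)\n",
"# reduction='sum' returns the total instead: 0 + 0 + 4\n",
"sum_loss = nn.MSELoss(reduction='sum')(y_hat, y)\n",
"mean_loss, sum_loss"
]
},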
{
"cell_type": "markdown",
"id": "30dbe343",
"metadata": {
"origin_pos": 44
},
"source": [
"## Defining the Optimization Algorithm\n"
]
},
{
"cell_type": "markdown",
"id": "2663da90",
"metadata": {
"origin_pos": 46,
"tab": [
"pytorch"
]
},
"source": [
"Minibatch stochastic gradient descent is a standard tool for optimizing neural networks,\n",
"and PyTorch implements many variants of this algorithm in its `optim` module.\n",
"When we (**instantiate an `SGD` instance**), we specify the parameters to optimize over\n",
"(obtainable from our model via `net.parameters()`), as well as a dictionary of hyperparameters required by the optimization algorithm.\n",
"Minibatch stochastic gradient descent just requires that we set the value `lr`, which is set to 0.03 here.\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "1ae0989f",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:54.703905Z",
"iopub.status.busy": "2023-08-18T07:01:54.703368Z",
"iopub.status.idle": "2023-08-18T07:01:54.708081Z",
"shell.execute_reply": "2023-08-18T07:01:54.706987Z"
},
"origin_pos": 50,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"trainer = torch.optim.SGD(net.parameters(), lr=0.03)"
]
},
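{
"cell_type": "markdown",
"id": "sgd-lr-note",
"metadata": {},
"source": [
"The hyperparameter dictionary we pass in is kept by the optimizer. As an aside that relies only on the generic `torch.optim` API (the model `tiny_net` below is a throwaway example, not part of this section), every optimizer stores its settings in `param_groups`, which is one way to inspect, or later adjust, the learning rate:\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "sgd-lr-demo",
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"from torch import nn\n",
"\n",
"tiny_net = nn.Sequential(nn.Linear(2, 1))  # throwaway model for illustration\n",
"opt = torch.optim.SGD(tiny_net.parameters(), lr=0.03)\n",
"print(opt.param_groups[0]['lr'])  # 0.03\n",
"opt.param_groups[0]['lr'] = 0.003  # e.g. decay the learning rate by 10x\n",
"print(opt.param_groups[0]['lr'])  # 0.003"
]
},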
{
"cell_type": "markdown",
"id": "004056f1",
"metadata": {
"origin_pos": 53
},
"source": [
"## Training\n",
"\n",
"Implementing our model through a deep learning framework's high-level APIs requires comparatively little code.\n",
"We did not have to allocate parameters individually, define our loss function, or implement minibatch stochastic gradient descent by hand.\n",
"The advantages of high-level APIs will grow considerably once we work with more complex models.\n",
"Once we have all the basic pieces in place, [**the training loop itself is strikingly similar to what we did when implementing everything from scratch**].\n",
"\n",
"To refresh your memory: in each epoch, we will make a complete pass over the dataset (`train_data`),\n",
"repeatedly grabbing one minibatch of inputs and the corresponding labels.\n",
"For each minibatch, we go through the following steps:\n",
"\n",
"* Generate predictions by calling `net(X)` and calculate the loss `l` (the forward propagation).\n",
"* Calculate gradients by running the backpropagation.\n",
"* Update the model parameters by invoking our optimizer.\n",
"\n",
"To get a better measure of training progress, we compute the loss after each epoch and print it to monitor the training process.\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "1270d706",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:54.712705Z",
"iopub.status.busy": "2023-08-18T07:01:54.712113Z",
"iopub.status.idle": "2023-08-18T07:01:54.922720Z",
"shell.execute_reply": "2023-08-18T07:01:54.921580Z"
},
"origin_pos": 55,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch 1, loss 0.000248\n",
"epoch 2, loss 0.000103\n",
"epoch 3, loss 0.000103\n"
]
}
],
"source": [
"num_epochs = 3\n",
"for epoch in range(num_epochs):\n",
"    for X, y in data_iter:\n",
"        l = loss(net(X), y)\n",
"        trainer.zero_grad()\n",
"        l.backward()\n",
"        trainer.step()\n",
"    l = loss(net(features), labels)\n",
"    print(f'epoch {epoch + 1}, loss {l:f}')"
]
},
{
"cell_type": "markdown",
"id": "2f52dea0",
"metadata": {
"origin_pos": 58
},
"source": [
"Below, we [**compare the model parameters learned by training on finite data with the true parameters**] that generated the dataset.\n",
"To access the parameters, we first access the layer that we need from `net`, and then read that layer's weights and bias.\n",
"As in our from-scratch implementation, our estimated parameters turn out to be very close to the true parameters that generated the data.\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "aa7cef5a",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:01:54.927464Z",
"iopub.status.busy": "2023-08-18T07:01:54.927072Z",
"iopub.status.idle": "2023-08-18T07:01:54.935672Z",
"shell.execute_reply": "2023-08-18T07:01:54.934585Z"
},
"origin_pos": 60,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"error in estimating w: tensor([-0.0010, -0.0003])\n",
"error in estimating b: tensor([-0.0003])\n"
]
}
],
"source": [
"w = net[0].weight.data\n",
"print('error in estimating w:', true_w - w.reshape(true_w.shape))\n",
"b = net[0].bias.data\n",
"print('error in estimating b:', true_b - b)"
]
},
{
"cell_type": "markdown",
"id": "f62d52d4",
"metadata": {
"origin_pos": 63
},
"source": [
"## Summary\n"
]
},
{
"cell_type": "markdown",
"id": "b6db4aa3",
"metadata": {
"origin_pos": 65,
"tab": [
"pytorch"
]
},
"source": [
"* We can implement models much more concisely using PyTorch's high-level APIs.\n",
"* In PyTorch, the `data` module provides tools for data processing, and the `nn` module defines a large number of neural network layers and common loss functions.\n",
"* We can initialize parameters by replacing their values with in-place methods ending with `_`.\n"
]
},
{
"cell_type": "markdown",
"id": "eb6af2c7",
"metadata": {
"origin_pos": 67
},
"source": [
"## Exercises\n",
"\n",
"1. How would you need to change the learning rate if you replace the aggregate loss over the minibatch with an average of the loss over the minibatch?\n",
"1. Review the deep learning framework documentation to see which loss functions and initialization methods are provided. Replace the loss with Huber's loss, i.e.\n",
"   $$l(y,y') = \\begin{cases}|y-y'| -\\frac{\\sigma}{2} & \\text{ if } |y-y'| > \\sigma \\\\ \\frac{1}{2 \\sigma} (y-y')^2 & \\text{ otherwise}\\end{cases}$$\n",
"1. How do you access the gradients of the linear regression model?\n"
]
},
{
"cell_type": "markdown",
"id": "4e43317d",
"metadata": {
"origin_pos": 69,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/1781)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}