2025-12-16 09:23:53 +08:00
parent 19138d3cc1
commit 9e7efd0626
409 changed files with 272713 additions and 241 deletions
@@ -0,0 +1,108 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "4eb98fe2",
"metadata": {
"origin_pos": 0
},
"source": [
"# 近似训练\n",
":label:`sec_approx_train`\n",
"\n",
"回想一下我们在 :numref:`sec_word2vec`中的讨论。跳元模型的主要思想是使用softmax运算来计算基于给定的中心词$w_c$生成上下文词$w_o$的条件概率(如 :eqref:`eq_skip-gram-softmax`),对应的对数损失在 :eqref:`eq_skip-gram-log`中给出。\n",
"\n",
"由于softmax操作的性质,上下文词可以是词表$\\mathcal{V}$中的任意项, :eqref:`eq_skip-gram-log`包含与整个词表大小一样多的项的求和。因此, :eqref:`eq_skip-gram-grad`中跳元模型的梯度计算和 :eqref:`eq_cbow-gradient`中的连续词袋模型的梯度计算都包含这样的求和。不幸的是,在整个词表(通常包含几十万甚至数百万个单词)上求和来计算梯度的成本是巨大的!\n",
"\n",
"为了降低上述计算复杂度,本节将介绍两种近似训练方法:*负采样*和*分层softmax*。\n",
"由于跳元模型和连续词袋模型的相似性,我们将以跳元模型为例来描述这两种近似训练方法。\n",
"\n",
"## 负采样\n",
":label:`subsec_negative-sampling`\n",
"\n",
"负采样修改了原目标函数。给定中心词$w_c$的上下文窗口,把任意上下文词$w_o$来自该上下文窗口这一情形视为一个事件,其概率由下式建模:\n",
"\n",
"$$P(D=1\\mid w_c, w_o) = \\sigma(\\mathbf{u}_o^\\top \\mathbf{v}_c),$$\n",
"\n",
"其中$\\sigma$使用了sigmoid激活函数的定义:\n",
"\n",
"$$\\sigma(x) = \\frac{1}{1+\\exp(-x)}.$$\n",
":eqlabel:`eq_sigma-f`\n",
"\n",
"让我们从最大化文本序列中所有这些事件的联合概率开始训练词嵌入。具体而言,给定长度为$T$的文本序列,以$w^{(t)}$表示时间步$t$的词,并设上下文窗口大小为$m$,考虑最大化联合概率:\n",
"\n",
"$$ \\prod_{t=1}^{T} \\prod_{-m \\leq j \\leq m,\\ j \\neq 0} P(D=1\\mid w^{(t)}, w^{(t+j)}).$$\n",
":eqlabel:`eq-negative-sample-pos`\n",
"\n",
"然而, :eqref:`eq-negative-sample-pos`只考虑那些正样本的事件。仅当所有词向量都等于无穷大时, :eqref:`eq-negative-sample-pos`中的联合概率才最大化为1。当然,这样的结果毫无意义。为了使目标函数更有意义,*负采样*添加从预定义分布中采样的负样本。\n",
"\n",
"用$S$表示上下文词$w_o$来自中心词$w_c$的上下文窗口的事件。对于这个涉及$w_o$的事件,从预定义分布$P(w)$中采样$K$个不是来自这个上下文窗口的*噪声词*。用$N_k$表示噪声词$w_k$($k=1, \\ldots, K$)不是来自$w_c$的上下文窗口的事件。假设正例和负例$S, N_1, \\ldots, N_K$的这些事件是相互独立的。负采样将 :eqref:`eq-negative-sample-pos`中的联合概率(仅涉及正例)重写为\n",
"\n",
"$$ \\prod_{t=1}^{T} \\prod_{-m \\leq j \\leq m,\\ j \\neq 0} P(w^{(t+j)} \\mid w^{(t)}),$$\n",
"\n",
"通过事件$S, N_1, \\ldots, N_K$近似条件概率:\n",
"\n",
"$$ P(w^{(t+j)} \\mid w^{(t)}) =P(D=1\\mid w^{(t)}, w^{(t+j)})\\prod_{k=1,\\ w_k \\sim P(w)}^K P(D=0\\mid w^{(t)}, w_k).$$\n",
":eqlabel:`eq-negative-sample-conditional-prob`\n",
"\n",
"分别用$i_t$和$h_k$表示词$w^{(t)}$和噪声词$w_k$在文本序列的时间步$t$处的索引。 :eqref:`eq-negative-sample-conditional-prob`中关于条件概率的对数损失为:\n",
"\n",
"$$\n",
"\\begin{aligned}\n",
"-\\log P(w^{(t+j)} \\mid w^{(t)})\n",
"=& -\\log P(D=1\\mid w^{(t)}, w^{(t+j)}) - \\sum_{k=1,\\ w_k \\sim P(w)}^K \\log P(D=0\\mid w^{(t)}, w_k)\\\\\n",
"=&- \\log\\, \\sigma\\left(\\mathbf{u}_{i_{t+j}}^\\top \\mathbf{v}_{i_t}\\right) - \\sum_{k=1,\\ w_k \\sim P(w)}^K \\log\\left(1-\\sigma\\left(\\mathbf{u}_{h_k}^\\top \\mathbf{v}_{i_t}\\right)\\right)\\\\\n",
"=&- \\log\\, \\sigma\\left(\\mathbf{u}_{i_{t+j}}^\\top \\mathbf{v}_{i_t}\\right) - \\sum_{k=1,\\ w_k \\sim P(w)}^K \\log\\sigma\\left(-\\mathbf{u}_{h_k}^\\top \\mathbf{v}_{i_t}\\right).\n",
"\\end{aligned}\n",
"$$\n",
"\n",
"我们可以看到,现在每个训练步的梯度计算成本与词表大小无关,而是线性依赖于$K$。当将超参数$K$设置为较小的值时,在负采样的每个训练步处的梯度的计算成本较小。\n",
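"\n",
"作为一个简单的数值示例(得分取值为假设,仅供说明):设$K=2$,正例得分为$\\mathbf{u}_{i_{t+j}}^\\top \\mathbf{v}_{i_t}=2$,两个噪声词的得分均为$-1$,代入上式可得对数损失\n",
"\n",
"$$-\\log\\sigma(2) - 2\\log\\sigma(1) \\approx 0.13 + 0.63 \\approx 0.75,$$\n",
"\n",
"每个训练步只需计算$K+1=3$个sigmoid项,而与词表大小$|\\mathcal{V}|$无关。\n",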
"\n",
"## 分层Softmax\n",
"\n",
"作为另一种近似训练方法,*分层Softmax*(hierarchical softmax)使用二叉树( :numref:`fig_hi_softmax`中说明的数据结构),其中树的每个叶节点表示词表$\\mathcal{V}$中的一个词。\n",
"\n",
"![用于近似训练的分层softmax,其中树的每个叶节点表示词表中的一个词](../img/hi-softmax.svg)\n",
":label:`fig_hi_softmax`\n",
"\n",
"用$L(w)$表示二叉树中表示词$w$的从根节点到叶节点的路径上的节点数(包括两端)。设$n(w,j)$为该路径上的第$j$个节点,其上下文词向量为$\\mathbf{u}_{n(w, j)}$。例如, :numref:`fig_hi_softmax`中的$L(w_3) = 4$。分层softmax将 :eqref:`eq_skip-gram-softmax`中的条件概率近似为\n",
"\n",
"$$P(w_o \\mid w_c) = \\prod_{j=1}^{L(w_o)-1} \\sigma\\left( [\\![ n(w_o, j+1) = \\text{leftChild}(n(w_o, j)) ]\\!] \\cdot \\mathbf{u}_{n(w_o, j)}^\\top \\mathbf{v}_c\\right),$$\n",
"\n",
"其中函数$\\sigma$在 :eqref:`eq_sigma-f`中定义,$\\text{leftChild}(n)$是节点$n$的左子节点;如果$x$为真,$[\\![x]\\!] = 1$;否则$[\\![x]\\!] = -1$。\n",
"\n",
"为了说明,让我们计算 :numref:`fig_hi_softmax`中给定词$w_c$生成词$w_3$的条件概率。这需要$w_c$的词向量$\\mathbf{v}_c$和从根到$w_3$的路径( :numref:`fig_hi_softmax`中加粗的路径)上的非叶节点向量之间的点积,该路径依次向左、向右和向左遍历:\n",
"\n",
"$$P(w_3 \\mid w_c) = \\sigma(\\mathbf{u}_{n(w_3, 1)}^\\top \\mathbf{v}_c) \\cdot \\sigma(-\\mathbf{u}_{n(w_3, 2)}^\\top \\mathbf{v}_c) \\cdot \\sigma(\\mathbf{u}_{n(w_3, 3)}^\\top \\mathbf{v}_c).$$\n",
"\n",
"由于$\\sigma(x)+\\sigma(-x) = 1$,可以验证基于任意词$w_c$生成词表$\\mathcal{V}$中所有词的条件概率总和为1:\n",
"\n",
"$$\\sum_{w \\in \\mathcal{V}} P(w \\mid w_c) = 1.$$\n",
":eqlabel:`eq_hi-softmax-sum-one`\n",
"\n",
"幸运的是,由于二叉树结构,$L(w_o)-1$的数量级为$\\mathcal{O}(\\log_2|\\mathcal{V}|)$。当词表大小$|\\mathcal{V}|$很大时,与不使用近似训练相比,使用分层softmax的每个训练步的计算代价显著降低。\n",
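"\n",
"举一个数量级的例子(数值仅供说明):当$|\\mathcal{V}| = 10^5$时,$\\log_2 |\\mathcal{V}| \\approx 17$,因此分层softmax的每个训练步只需沿路径计算约$17$个sigmoid项,而不是在$10^5$个词上求和。\n",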
"\n",
"## 小结\n",
"\n",
"* 负采样通过考虑相互独立的事件来构造损失函数,这些事件同时涉及正例和负例。训练的计算量与每一步的噪声词数成线性关系。\n",
"* 分层softmax使用二叉树中从根节点到叶节点的路径构造损失函数。训练的计算成本取决于词表大小的对数。\n",
"\n",
"## 练习\n",
"\n",
"1. 如何在负采样中对噪声词进行采样?\n",
"1. 验证 :eqref:`eq_hi-softmax-sum-one`是否成立。\n",
"1. 如何分别使用负采样和分层softmax训练连续词袋模型?\n",
"\n",
"[Discussions](https://discuss.d2l.ai/t/5741)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,581 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "e6875f27",
"metadata": {
"origin_pos": 0
},
"source": [
"# 用于预训练BERT的数据集\n",
":label:`sec_bert-dataset`\n",
"\n",
"为了预训练 :numref:`sec_bert`中实现的BERT模型,我们需要以理想的格式生成数据集,以便于两个预训练任务:遮蔽语言模型和下一句预测。一方面,最初的BERT模型是在两个庞大的图书语料库和英语维基百科(参见 :numref:`subsec_bert_pretraining_tasks`)的合集上预训练的,但对本书的大多数读者来说,在如此庞大的语料上进行预训练是难以实现的。另一方面,现成的预训练BERT模型可能不适合医学等特定领域的应用。因此,在定制的数据集上对BERT进行预训练变得越来越流行。为了方便BERT预训练的演示,我们使用了较小的语料库WikiText-2 :cite:`Merity.Xiong.Bradbury.ea.2016`。\n",
"\n",
"与 :numref:`sec_word2vec_data`中用于预训练word2vec的PTB数据集相比,WikiText-2(1)保留了原来的标点符号,适合于下一句预测;(2)保留了原来的大小写和数字;(3)大了一倍以上。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "342b7589",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:38.284931Z",
"iopub.status.busy": "2023-08-18T07:00:38.284353Z",
"iopub.status.idle": "2023-08-18T07:00:41.113963Z",
"shell.execute_reply": "2023-08-18T07:00:41.112838Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import os\n",
"import random\n",
"import torch\n",
"from d2l import torch as d2l"
]
},
{
"cell_type": "markdown",
"id": "691a2248",
"metadata": {
"origin_pos": 4
},
"source": [
"在WikiText-2数据集中,每行代表一个段落,其中在任意标点符号及其前面的词元之间插入空格。保留至少有两句话的段落。为了简单起见,我们仅使用句号作为分隔符来拆分句子。我们将更复杂的句子拆分技术的讨论留在本节末尾的练习中。\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "eb911790",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:41.118878Z",
"iopub.status.busy": "2023-08-18T07:00:41.118515Z",
"iopub.status.idle": "2023-08-18T07:00:41.124582Z",
"shell.execute_reply": "2023-08-18T07:00:41.123696Z"
},
"origin_pos": 5,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"d2l.DATA_HUB['wikitext-2'] = (\n",
" 'https://s3.amazonaws.com/research.metamind.io/wikitext/'\n",
" 'wikitext-2-v1.zip', '3c914d17d80b1459be871a5039ac23e752a53cbe')\n",
"\n",
"#@save\n",
"def _read_wiki(data_dir):\n",
" file_name = os.path.join(data_dir, 'wiki.train.tokens')\n",
" with open(file_name, 'r') as f:\n",
" lines = f.readlines()\n",
" # 大写字母转换为小写字母\n",
" paragraphs = [line.strip().lower().split(' . ')\n",
" for line in lines if len(line.split(' . ')) >= 2]\n",
" random.shuffle(paragraphs)\n",
" return paragraphs"
]
},
{
"cell_type": "markdown",
"id": "f2f5515b",
"metadata": {
"origin_pos": 6
},
"source": [
"## 为预训练任务定义辅助函数\n",
"\n",
"在下文中,我们首先为BERT的两个预训练任务实现辅助函数。这些辅助函数将在稍后将原始文本语料库转换为理想格式的数据集时调用,以预训练BERT。\n",
"\n",
"### 生成下一句预测任务的数据\n",
"\n",
"根据 :numref:`subsec_nsp`的描述,`_get_next_sentence`函数生成二分类任务的训练样本。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "246ca273",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:41.128645Z",
"iopub.status.busy": "2023-08-18T07:00:41.128375Z",
"iopub.status.idle": "2023-08-18T07:00:41.133471Z",
"shell.execute_reply": "2023-08-18T07:00:41.132347Z"
},
"origin_pos": 7,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"def _get_next_sentence(sentence, next_sentence, paragraphs):\n",
" if random.random() < 0.5:\n",
" is_next = True\n",
" else:\n",
" # paragraphs是三重列表的嵌套\n",
" next_sentence = random.choice(random.choice(paragraphs))\n",
" is_next = False\n",
" return sentence, next_sentence, is_next"
]
},
{
"cell_type": "markdown",
"id": "13b1d432",
"metadata": {
"origin_pos": 8
},
"source": [
"下面的函数通过调用`_get_next_sentence`函数从输入`paragraph`生成用于下一句预测的训练样本。这里`paragraph`是句子列表,其中每个句子都是词元列表。参数`max_len`指定预训练期间的BERT输入序列的最大长度。\n",
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "a7686fde",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:41.137934Z",
"iopub.status.busy": "2023-08-18T07:00:41.137439Z",
"iopub.status.idle": "2023-08-18T07:00:41.143146Z",
"shell.execute_reply": "2023-08-18T07:00:41.142265Z"
},
"origin_pos": 9,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"def _get_nsp_data_from_paragraph(paragraph, paragraphs, vocab, max_len):\n",
" nsp_data_from_paragraph = []\n",
" for i in range(len(paragraph) - 1):\n",
" tokens_a, tokens_b, is_next = _get_next_sentence(\n",
" paragraph[i], paragraph[i + 1], paragraphs)\n",
" # 考虑1个'<cls>'词元和2个'<sep>'词元\n",
" if len(tokens_a) + len(tokens_b) + 3 > max_len:\n",
" continue\n",
" tokens, segments = d2l.get_tokens_and_segments(tokens_a, tokens_b)\n",
" nsp_data_from_paragraph.append((tokens, segments, is_next))\n",
" return nsp_data_from_paragraph"
]
},
{
"cell_type": "markdown",
"id": "86277b80",
"metadata": {
"origin_pos": 10
},
"source": [
"### 生成遮蔽语言模型任务的数据\n",
":label:`subsec_prepare_mlm_data`\n",
"\n",
"为了从BERT输入序列生成遮蔽语言模型的训练样本,我们定义了以下`_replace_mlm_tokens`函数。在其输入中,`tokens`是表示BERT输入序列的词元的列表,`candidate_pred_positions`是不包括特殊词元的BERT输入序列的词元索引的列表(特殊词元在遮蔽语言模型任务中不被预测),以及`num_mlm_preds`指示预测的数量(选择15%要预测的随机词元)。在 :numref:`subsec_mlm`中定义遮蔽语言模型任务之后,在每个预测位置,输入可以由特殊的“掩码”词元或随机词元替换,或者保持不变。最后,该函数返回可能替换后的输入词元、发生预测的词元索引和这些预测的标签。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "5e3de2c8",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:41.147428Z",
"iopub.status.busy": "2023-08-18T07:00:41.146946Z",
"iopub.status.idle": "2023-08-18T07:00:41.155481Z",
"shell.execute_reply": "2023-08-18T07:00:41.154569Z"
},
"origin_pos": 11,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"def _replace_mlm_tokens(tokens, candidate_pred_positions, num_mlm_preds,\n",
" vocab):\n",
" # 为遮蔽语言模型的输入创建新的词元副本,其中输入可能包含替换的“<mask>”或随机词元\n",
" mlm_input_tokens = [token for token in tokens]\n",
" pred_positions_and_labels = []\n",
" # 打乱后用于在遮蔽语言模型任务中获取15%的随机词元进行预测\n",
" random.shuffle(candidate_pred_positions)\n",
" for mlm_pred_position in candidate_pred_positions:\n",
" if len(pred_positions_and_labels) >= num_mlm_preds:\n",
" break\n",
" masked_token = None\n",
" # 80%的时间:将词替换为“<mask>”词元\n",
" if random.random() < 0.8:\n",
" masked_token = '<mask>'\n",
" else:\n",
" # 10%的时间:保持词不变\n",
" if random.random() < 0.5:\n",
" masked_token = tokens[mlm_pred_position]\n",
" # 10%的时间:用随机词替换该词\n",
" else:\n",
" masked_token = random.choice(vocab.idx_to_token)\n",
" mlm_input_tokens[mlm_pred_position] = masked_token\n",
" pred_positions_and_labels.append(\n",
" (mlm_pred_position, tokens[mlm_pred_position]))\n",
" return mlm_input_tokens, pred_positions_and_labels"
]
},
{
"cell_type": "markdown",
"id": "81ce2383",
"metadata": {
"origin_pos": 12
},
"source": [
"通过调用前述的`_replace_mlm_tokens`函数,以下函数将BERT输入序列(`tokens`)作为输入,并返回输入词元的索引(在 :numref:`subsec_mlm`中描述的可能的词元替换之后)、发生预测的词元索引以及这些预测的标签索引。\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "841a4650",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:41.160061Z",
"iopub.status.busy": "2023-08-18T07:00:41.159300Z",
"iopub.status.idle": "2023-08-18T07:00:41.165820Z",
"shell.execute_reply": "2023-08-18T07:00:41.164855Z"
},
"origin_pos": 13,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"def _get_mlm_data_from_tokens(tokens, vocab):\n",
" candidate_pred_positions = []\n",
" # tokens是一个字符串列表\n",
" for i, token in enumerate(tokens):\n",
" # 在遮蔽语言模型任务中不会预测特殊词元\n",
" if token in ['<cls>', '<sep>']:\n",
" continue\n",
" candidate_pred_positions.append(i)\n",
" # 遮蔽语言模型任务中预测15%的随机词元\n",
" num_mlm_preds = max(1, round(len(tokens) * 0.15))\n",
" mlm_input_tokens, pred_positions_and_labels = _replace_mlm_tokens(\n",
" tokens, candidate_pred_positions, num_mlm_preds, vocab)\n",
" pred_positions_and_labels = sorted(pred_positions_and_labels,\n",
" key=lambda x: x[0])\n",
" pred_positions = [v[0] for v in pred_positions_and_labels]\n",
" mlm_pred_labels = [v[1] for v in pred_positions_and_labels]\n",
" return vocab[mlm_input_tokens], pred_positions, vocab[mlm_pred_labels]"
]
},
{
"cell_type": "markdown",
"id": "396550b1",
"metadata": {
"origin_pos": 14
},
"source": [
"## 将文本转换为预训练数据集\n",
"\n",
"现在我们几乎准备好为BERT预训练定制一个`Dataset`类。在此之前,我们仍然需要定义辅助函数`_pad_bert_inputs`来将特殊的“&lt;pad&gt;”词元附加到输入。它的参数`examples`包含来自两个预训练任务的辅助函数`_get_nsp_data_from_paragraph`和`_get_mlm_data_from_tokens`的输出。\n",
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "6552099b",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:41.170203Z",
"iopub.status.busy": "2023-08-18T07:00:41.169578Z",
"iopub.status.idle": "2023-08-18T07:00:41.180126Z",
"shell.execute_reply": "2023-08-18T07:00:41.179219Z"
},
"origin_pos": 16,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"def _pad_bert_inputs(examples, max_len, vocab):\n",
" max_num_mlm_preds = round(max_len * 0.15)\n",
" all_token_ids, all_segments, valid_lens, = [], [], []\n",
" all_pred_positions, all_mlm_weights, all_mlm_labels = [], [], []\n",
" nsp_labels = []\n",
" for (token_ids, pred_positions, mlm_pred_label_ids, segments,\n",
" is_next) in examples:\n",
" all_token_ids.append(torch.tensor(token_ids + [vocab['<pad>']] * (\n",
" max_len - len(token_ids)), dtype=torch.long))\n",
" all_segments.append(torch.tensor(segments + [0] * (\n",
" max_len - len(segments)), dtype=torch.long))\n",
" # valid_lens不包括'<pad>'的计数\n",
" valid_lens.append(torch.tensor(len(token_ids), dtype=torch.float32))\n",
" all_pred_positions.append(torch.tensor(pred_positions + [0] * (\n",
" max_num_mlm_preds - len(pred_positions)), dtype=torch.long))\n",
" # 填充词元的预测将通过乘以0权重在损失中过滤掉\n",
" all_mlm_weights.append(\n",
" torch.tensor([1.0] * len(mlm_pred_label_ids) + [0.0] * (\n",
" max_num_mlm_preds - len(pred_positions)),\n",
" dtype=torch.float32))\n",
" all_mlm_labels.append(torch.tensor(mlm_pred_label_ids + [0] * (\n",
" max_num_mlm_preds - len(mlm_pred_label_ids)), dtype=torch.long))\n",
" nsp_labels.append(torch.tensor(is_next, dtype=torch.long))\n",
" return (all_token_ids, all_segments, valid_lens, all_pred_positions,\n",
" all_mlm_weights, all_mlm_labels, nsp_labels)"
]
},
{
"cell_type": "markdown",
"id": "d4e8a88c",
"metadata": {
"origin_pos": 18
},
"source": [
"将用于生成两个预训练任务的训练样本的辅助函数和用于填充输入的辅助函数放在一起,我们定义以下`_WikiTextDataset`类为用于预训练BERT的WikiText-2数据集。通过实现`__getitem__`函数,我们可以任意访问由WikiText-2语料库中一对句子生成的预训练(遮蔽语言模型和下一句预测)样本。\n",
"\n",
"最初的BERT模型使用词表大小为30000的WordPiece嵌入 :cite:`Wu.Schuster.Chen.ea.2016`。WordPiece的词元化方法是对 :numref:`subsec_Byte_Pair_Encoding`中原有的字节对编码算法稍作修改。为简单起见,我们使用`d2l.tokenize`函数进行词元化。出现次数少于5次的不频繁词元将被过滤掉。\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "c4d049c9",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:41.184551Z",
"iopub.status.busy": "2023-08-18T07:00:41.183947Z",
"iopub.status.idle": "2023-08-18T07:00:41.192539Z",
"shell.execute_reply": "2023-08-18T07:00:41.191426Z"
},
"origin_pos": 20,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"class _WikiTextDataset(torch.utils.data.Dataset):\n",
" def __init__(self, paragraphs, max_len):\n",
" # 输入paragraphs[i]是代表段落的句子字符串列表;\n",
" # 而输出paragraphs[i]是代表段落的句子列表,其中每个句子都是词元列表\n",
" paragraphs = [d2l.tokenize(\n",
" paragraph, token='word') for paragraph in paragraphs]\n",
" sentences = [sentence for paragraph in paragraphs\n",
" for sentence in paragraph]\n",
" self.vocab = d2l.Vocab(sentences, min_freq=5, reserved_tokens=[\n",
" '<pad>', '<mask>', '<cls>', '<sep>'])\n",
" # 获取下一句子预测任务的数据\n",
" examples = []\n",
" for paragraph in paragraphs:\n",
" examples.extend(_get_nsp_data_from_paragraph(\n",
" paragraph, paragraphs, self.vocab, max_len))\n",
" # 获取遮蔽语言模型任务的数据\n",
" examples = [(_get_mlm_data_from_tokens(tokens, self.vocab)\n",
" + (segments, is_next))\n",
" for tokens, segments, is_next in examples]\n",
" # 填充输入\n",
" (self.all_token_ids, self.all_segments, self.valid_lens,\n",
" self.all_pred_positions, self.all_mlm_weights,\n",
" self.all_mlm_labels, self.nsp_labels) = _pad_bert_inputs(\n",
" examples, max_len, self.vocab)\n",
"\n",
" def __getitem__(self, idx):\n",
" return (self.all_token_ids[idx], self.all_segments[idx],\n",
" self.valid_lens[idx], self.all_pred_positions[idx],\n",
" self.all_mlm_weights[idx], self.all_mlm_labels[idx],\n",
" self.nsp_labels[idx])\n",
"\n",
" def __len__(self):\n",
" return len(self.all_token_ids)"
]
},
{
"cell_type": "markdown",
"id": "0ede31c0",
"metadata": {
"origin_pos": 22
},
"source": [
"通过使用`_read_wiki`函数和`_WikiTextDataset`类,我们定义了下面的`load_data_wiki`来下载并生成WikiText-2数据集,并从中生成预训练样本。\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "9b484a88",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:41.197261Z",
"iopub.status.busy": "2023-08-18T07:00:41.196591Z",
"iopub.status.idle": "2023-08-18T07:00:41.202074Z",
"shell.execute_reply": "2023-08-18T07:00:41.201154Z"
},
"origin_pos": 24,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"def load_data_wiki(batch_size, max_len):\n",
" \"\"\"加载WikiText-2数据集\"\"\"\n",
" num_workers = d2l.get_dataloader_workers()\n",
" data_dir = d2l.download_extract('wikitext-2', 'wikitext-2')\n",
" paragraphs = _read_wiki(data_dir)\n",
" train_set = _WikiTextDataset(paragraphs, max_len)\n",
" train_iter = torch.utils.data.DataLoader(train_set, batch_size,\n",
" shuffle=True, num_workers=num_workers)\n",
" return train_iter, train_set.vocab"
]
},
{
"cell_type": "markdown",
"id": "74b59eb9",
"metadata": {
"origin_pos": 26
},
"source": [
"将批量大小设置为512,将BERT输入序列的最大长度设置为64,我们打印出小批量的BERT预训练样本的形状。注意,在每个BERT输入序列中,为遮蔽语言模型任务预测$10$个($64 \\times 0.15$四舍五入)位置。\n",
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "f1a8e103",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:41.206083Z",
"iopub.status.busy": "2023-08-18T07:00:41.205815Z",
"iopub.status.idle": "2023-08-18T07:00:52.152614Z",
"shell.execute_reply": "2023-08-18T07:00:52.151321Z"
},
"origin_pos": 27,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloading ../data/wikitext-2-v1.zip from https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip...\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"torch.Size([512, 64]) torch.Size([512, 64]) torch.Size([512]) torch.Size([512, 10]) torch.Size([512, 10]) torch.Size([512, 10]) torch.Size([512])\n"
]
}
],
"source": [
"batch_size, max_len = 512, 64\n",
"train_iter, vocab = load_data_wiki(batch_size, max_len)\n",
"\n",
"for (tokens_X, segments_X, valid_lens_x, pred_positions_X, mlm_weights_X,\n",
" mlm_Y, nsp_y) in train_iter:\n",
" print(tokens_X.shape, segments_X.shape, valid_lens_x.shape,\n",
" pred_positions_X.shape, mlm_weights_X.shape, mlm_Y.shape,\n",
" nsp_y.shape)\n",
" break"
]
},
{
"cell_type": "markdown",
"id": "c8b78dd7",
"metadata": {
"origin_pos": 28
},
"source": [
"最后,我们来看一下词表大小。即使在过滤掉不频繁的词元之后,它仍然比PTB数据集的词表大两倍以上。\n",
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "47b86684",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:00:52.159404Z",
"iopub.status.busy": "2023-08-18T07:00:52.158958Z",
"iopub.status.idle": "2023-08-18T07:00:52.169643Z",
"shell.execute_reply": "2023-08-18T07:00:52.168438Z"
},
"origin_pos": 29,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"20256"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(vocab)"
]
},
{
"cell_type": "markdown",
"id": "081adbe2",
"metadata": {
"origin_pos": 30
},
"source": [
"## 小结\n",
"\n",
"* 与PTB数据集相比,WikiText-2数据集保留了原来的标点符号、大小写和数字,并且比PTB数据集大了两倍多。\n",
"* 我们可以任意访问从WikiText-2语料库中的一对句子生成的预训练(遮蔽语言模型和下一句预测)样本。\n",
"\n",
"## 练习\n",
"\n",
"1. 为简单起见,句号用作拆分句子的唯一分隔符。尝试其他的句子拆分技术,比如Spacy和NLTK。以NLTK为例,需要先安装NLTK:`pip install nltk`。在代码中先`import nltk`,然后下载Punkt语句词元分析器:`nltk.download('punkt')`。要拆分句子,比如`sentences = 'This is great ! Why not ?'`,调用`nltk.tokenize.sent_tokenize(sentences)`将返回两个句子字符串的列表:`['This is great !', 'Why not ?']`。\n",
"1. 如果我们不过滤掉一些不常见的词元,词表会有多大?\n",
]
},
{
"cell_type": "markdown",
"id": "cebcf3ae",
"metadata": {
"origin_pos": 32,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/5738)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,654 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "f498e022",
"metadata": {
"origin_pos": 0
},
"source": [
"# 来自Transformers的双向编码器表示(BERT)\n",
":label:`sec_bert`\n",
"\n",
"我们已经介绍了几种用于自然语言理解的词嵌入模型。在预训练之后,输出可以被认为是一个矩阵,其中每一行都是一个表示预定义词表中词的向量。事实上,这些词嵌入模型都是与上下文无关的。让我们先来说明这个性质。\n",
"\n",
"## 从上下文无关到上下文敏感\n",
"\n",
"回想一下 :numref:`sec_word2vec_pretraining`和 :numref:`sec_synonyms`中的实验。例如,word2vec和GloVe都将相同的预训练向量分配给同一个词,而不考虑词的上下文(如果有的话)。形式上,任何词元$x$的上下文无关表示是函数$f(x)$,其仅将$x$作为其输入。考虑到自然语言中丰富的多义现象和复杂的语义,上下文无关表示具有明显的局限性。例如,在“a crane is flying”(一只鹤在飞)和“a crane driver came”(一名吊车司机来了)的上下文中,“crane”一词有完全不同的含义;因此,同一个词可以根据上下文被赋予不同的表示。\n",
"\n",
"这推动了“上下文敏感”词表示的发展,其中词的表征取决于它们的上下文。因此,词元$x$的上下文敏感表示是函数$f(x, c(x))$,其取决于$x$及其上下文$c(x)$。流行的上下文敏感表示包括TagLM(language-model-augmented sequence tagger,语言模型增强的序列标记器) :cite:`Peters.Ammar.Bhagavatula.ea.2017`、CoVe(Context Vectors,上下文向量) :cite:`McCann.Bradbury.Xiong.ea.2017`和ELMo(Embeddings from Language Models,来自语言模型的嵌入) :cite:`Peters.Neumann.Iyyer.ea.2018`。\n",
"\n",
"例如,通过将整个序列作为输入,ELMo是为输入序列中的每个单词分配一个表示的函数。具体来说,ELMo将来自预训练的双向长短期记忆网络的所有中间层表示组合为输出表示。然后,ELMo的表示将作为附加特征添加到下游任务的现有监督模型中,例如通过将ELMo的表示和现有模型中词元的原始表示(例如GloVe)连结起来。一方面,在加入ELMo表示后,冻结了预训练的双向LSTM模型中的所有权重。另一方面,现有的监督模型是专门为给定的任务定制的。利用当时不同任务的不同最佳模型,添加ELMo改进了六种自然语言处理任务的技术水平:情感分析、自然语言推断、语义角色标注、共指消解、命名实体识别和问答。\n",
"\n",
"## 从特定于任务到不可知任务\n",
"\n",
"尽管ELMo显著改进了各种自然语言处理任务的解决方案,但每个解决方案仍然依赖于一个特定于任务的架构。然而,为每一个自然语言处理任务设计一个特定的架构实际上并不是一件容易的事。GPT(Generative Pre-Training,生成式预训练)模型为上下文敏感表示设计了通用的任务无关模型 :cite:`Radford.Narasimhan.Salimans.ea.2018`。GPT建立在Transformer解码器的基础上,预训练了一个用于表示文本序列的语言模型。当将GPT应用于下游任务时,语言模型的输出将被送到一个附加的线性输出层,以预测任务的标签。与ELMo冻结预训练模型的参数不同,GPT在下游任务的监督学习过程中对预训练Transformer解码器中的所有参数进行微调。GPT在自然语言推断、问答、句子相似性和分类等12项任务上进行了评估,并在对模型架构进行最小更改的情况下改善了其中9项任务的最新水平。\n",
"\n",
"然而,由于语言模型的自回归特性,GPT只能向前看(从左到右)。在“i went to the bank to deposit cash”(我去银行存现金)和“i went to the bank to sit down”(我去河岸边坐下)的上下文中,由于“bank”对其左边的上下文敏感,GPT将返回“bank”的相同表示,尽管它有不同的含义。\n",
"\n",
"## BERT:把两个最好的结合起来\n",
"\n",
"如我们所见,ELMo对上下文进行双向编码,但使用特定于任务的架构;而GPT是任务无关的,但是从左到右编码上下文。BERT(来自Transformers的双向编码器表示)结合了这两个方面的优点:它对上下文进行双向编码,并且对于大多数的自然语言处理任务只需要最少的架构改变 :cite:`Devlin.Chang.Lee.ea.2018`。通过使用预训练的Transformer编码器,BERT能够基于其双向上下文表示任何词元。在下游任务的监督学习过程中,BERT在两个方面与GPT相似。首先,BERT表示将被输入到一个添加的输出层中,根据任务的性质对模型架构进行最小的更改,例如预测每个词元还是预测整个序列。其次,对预训练Transformer编码器的所有参数进行微调,而额外的输出层将从头开始训练。 :numref:`fig_elmo-gpt-bert` 描述了ELMo、GPT和BERT之间的差异。\n",
"\n",
"![ELMo、GPT和BERT的比较](../img/elmo-gpt-bert.svg)\n",
":label:`fig_elmo-gpt-bert`\n",
"\n",
"BERT进一步改进了11种自然语言处理任务的技术水平,这些任务分为以下几个大类:(1)单一文本分类(如情感分析)、(2)文本对分类(如自然语言推断)、(3)问答、(4)文本标记(如命名实体识别)。从上下文敏感的ELMo到任务不可知的GPT和BERT,它们都是在2018年提出的。概念上简单但经验上强大的自然语言深度表示预训练已经彻底改变了各种自然语言处理任务的解决方案。\n",
"\n",
"在本章的其余部分,我们将深入了解BERT的预训练。当在 :numref:`chap_nlp_app`中解释自然语言处理应用时,我们将说明针对下游应用的BERT微调。\n",
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "6042930c",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:04.580152Z",
"iopub.status.busy": "2023-08-18T06:58:04.579563Z",
"iopub.status.idle": "2023-08-18T06:58:06.551921Z",
"shell.execute_reply": "2023-08-18T06:58:06.551014Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import torch\n",
"from torch import nn\n",
"from d2l import torch as d2l"
]
},
{
"cell_type": "markdown",
"id": "b3092258",
"metadata": {
"origin_pos": 4
},
"source": [
"## 输入表示\n",
":label:`subsec_bert_input_rep`\n",
"\n",
"在自然语言处理中,有些任务(如情感分析)以单个文本作为输入,而有些任务(如自然语言推断)以一对文本序列作为输入。BERT输入序列明确地表示单个文本和文本对。当输入为单个文本时,BERT输入序列是特殊类别词元“&lt;cls&gt;”、文本序列的词元、以及特殊分隔词元“&lt;sep&gt;”的连结。当输入为文本对时,BERT输入序列是“&lt;cls&gt;”、第一个文本序列的词元、“&lt;sep&gt;”、第二个文本序列的词元、以及“&lt;sep&gt;”的连结。我们将始终如一地将术语“BERT输入序列”与其他类型的“序列”区分开来。例如,一个*BERT输入序列*可以包括一个*文本序列*或两个*文本序列*。\n",
"\n",
"为了区分文本对,根据输入序列学到的片段嵌入$\\mathbf{e}_A$和$\\mathbf{e}_B$分别被添加到第一序列和第二序列的词元嵌入中。对于单文本输入,仅使用$\\mathbf{e}_A$。\n",
"\n",
"下面的`get_tokens_and_segments`将一个句子或两个句子作为输入,然后返回BERT输入序列的词元及其相应的片段索引。\n",
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "4e5d0098",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:06.556248Z",
"iopub.status.busy": "2023-08-18T06:58:06.555588Z",
"iopub.status.idle": "2023-08-18T06:58:06.561006Z",
"shell.execute_reply": "2023-08-18T06:58:06.560200Z"
},
"origin_pos": 5,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"def get_tokens_and_segments(tokens_a, tokens_b=None):\n",
" \"\"\"获取输入序列的词元及其片段索引\"\"\"\n",
" tokens = ['<cls>'] + tokens_a + ['<sep>']\n",
" # 0和1分别标记片段A和B\n",
" segments = [0] * (len(tokens_a) + 2)\n",
" if tokens_b is not None:\n",
" tokens += tokens_b + ['<sep>']\n",
" segments += [1] * (len(tokens_b) + 1)\n",
" return tokens, segments"
]
},
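{
"cell_type": "markdown",
"metadata": {},
"source": [
"例如(一个最小的用法示例,其中词元列表`['a', 'b']`和`['c']`为假设的输入),调用`get_tokens_and_segments`可以看到“&lt;cls&gt;”和“&lt;sep&gt;”的插入位置以及片段索引0和1的划分:`tokens`为`['<cls>', 'a', 'b', '<sep>', 'c', '<sep>']`,`segments`为`[0, 0, 0, 0, 1, 1]`。\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"get_tokens_and_segments(['a', 'b'], ['c'])"
]
},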
{
"cell_type": "markdown",
"id": "568f895f",
"metadata": {
"origin_pos": 6
},
"source": [
"BERT选择Transformer编码器作为其双向架构。与Transformer编码器中常见的做法一样,位置嵌入被加入到输入序列的每个位置。然而,与原始的Transformer编码器不同,BERT使用*可学习的*位置嵌入。总之, \n",
":numref:`fig_bert-input`表明BERT输入序列的嵌入是词元嵌入、片段嵌入和位置嵌入的和。\n",
"\n",
"![BERT输入序列的嵌入是词元嵌入、片段嵌入和位置嵌入的和](../img/bert-input.svg)\n",
":label:`fig_bert-input`\n",
"\n",
"下面的`BERTEncoder`类类似于 :numref:`sec_transformer`中实现的`TransformerEncoder`类。与`TransformerEncoder`不同,`BERTEncoder`使用片段嵌入和可学习的位置嵌入。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "6ad098c5",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:06.564603Z",
"iopub.status.busy": "2023-08-18T06:58:06.564068Z",
"iopub.status.idle": "2023-08-18T06:58:06.571897Z",
"shell.execute_reply": "2023-08-18T06:58:06.571098Z"
},
"origin_pos": 8,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"class BERTEncoder(nn.Module):\n",
" \"\"\"BERT编码器\"\"\"\n",
" def __init__(self, vocab_size, num_hiddens, norm_shape, ffn_num_input,\n",
" ffn_num_hiddens, num_heads, num_layers, dropout,\n",
" max_len=1000, key_size=768, query_size=768, value_size=768,\n",
" **kwargs):\n",
" super(BERTEncoder, self).__init__(**kwargs)\n",
" self.token_embedding = nn.Embedding(vocab_size, num_hiddens)\n",
" self.segment_embedding = nn.Embedding(2, num_hiddens)\n",
" self.blks = nn.Sequential()\n",
" for i in range(num_layers):\n",
" self.blks.add_module(f\"{i}\", d2l.EncoderBlock(\n",
" key_size, query_size, value_size, num_hiddens, norm_shape,\n",
" ffn_num_input, ffn_num_hiddens, num_heads, dropout, True))\n",
" # 在BERT中,位置嵌入是可学习的,因此我们创建一个足够长的位置嵌入参数\n",
" self.pos_embedding = nn.Parameter(torch.randn(1, max_len,\n",
" num_hiddens))\n",
"\n",
" def forward(self, tokens, segments, valid_lens):\n",
"        # 在以下代码段中,X的形状保持不变:(批量大小,最大序列长度,num_hiddens)\n",
" X = self.token_embedding(tokens) + self.segment_embedding(segments)\n",
" X = X + self.pos_embedding.data[:, :X.shape[1], :]\n",
" for blk in self.blks:\n",
" X = blk(X, valid_lens)\n",
" return X"
]
},
{
"cell_type": "markdown",
"id": "fd683c2c",
"metadata": {
"origin_pos": 10
},
"source": [
"假设词表大小为10000,为了演示`BERTEncoder`的前向推断,让我们创建一个实例并初始化它的参数。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "94237d14",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:06.575485Z",
"iopub.status.busy": "2023-08-18T06:58:06.574955Z",
"iopub.status.idle": "2023-08-18T06:58:06.758687Z",
"shell.execute_reply": "2023-08-18T06:58:06.757737Z"
},
"origin_pos": 12,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"vocab_size, num_hiddens, ffn_num_hiddens, num_heads = 10000, 768, 1024, 4\n",
"norm_shape, ffn_num_input, num_layers, dropout = [768], 768, 2, 0.2\n",
"encoder = BERTEncoder(vocab_size, num_hiddens, norm_shape, ffn_num_input,\n",
" ffn_num_hiddens, num_heads, num_layers, dropout)"
]
},
{
"cell_type": "markdown",
"id": "e69503ec",
"metadata": {
"origin_pos": 13
},
"source": [
"我们将`tokens`定义为长度为8的2个输入序列,其中每个词元是词表的索引。使用输入`tokens`的`BERTEncoder`的前向推断返回编码结果,其中每个词元由向量表示,其长度由超参数`num_hiddens`定义。此超参数通常称为Transformer编码器的*隐藏大小*(隐藏单元数)。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "57e87013",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:06.762791Z",
"iopub.status.busy": "2023-08-18T06:58:06.762204Z",
"iopub.status.idle": "2023-08-18T06:58:06.780913Z",
"shell.execute_reply": "2023-08-18T06:58:06.779803Z"
},
"origin_pos": 15,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([2, 8, 768])"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tokens = torch.randint(0, vocab_size, (2, 8))\n",
"segments = torch.tensor([[0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 1, 1, 1, 1, 1]])\n",
"encoded_X = encoder(tokens, segments, None)\n",
"encoded_X.shape"
]
},
{
"cell_type": "markdown",
"id": "768a42de",
"metadata": {
"origin_pos": 17
},
"source": [
"## 预训练任务\n",
":label:`subsec_bert_pretraining_tasks`\n",
"\n",
"`BERTEncoder`的前向推断给出了输入文本的每个词元和插入的特殊词元“&lt;cls&gt;”及“&lt;sep&gt;”的BERT表示。接下来,我们将使用这些表示来计算预训练BERT的损失函数。预训练包括以下两个任务:掩蔽语言模型和下一句预测。\n",
"\n",
"### 掩蔽语言模型(Masked Language Modeling)\n",
":label:`subsec_mlm`\n",
"\n",
"如 :numref:`sec_language_model`所示,语言模型使用左侧的上下文预测词元。为了双向编码上下文以表示每个词元,BERT随机掩蔽词元并使用来自双向上下文的词元以自监督的方式预测掩蔽词元。此任务称为*掩蔽语言模型*。\n",
"\n",
"在这个预训练任务中,将随机选择15%的词元作为预测的掩蔽词元。要预测一个掩蔽词元而不使用标签作弊,一个简单的方法是总是用一个特殊的“&lt;mask&gt;”替换输入序列中的词元。然而,人造特殊词元“&lt;mask&gt;”不会出现在微调中。为了避免预训练和微调之间的这种不匹配,如果为预测而屏蔽词元(例如,在“this movie is great”中选择掩蔽和预测“great”),则在输入中将其替换为:\n",
"\n",
"* 80%时间为特殊的“&lt;mask&gt;”词元(例如,“this movie is great”变为“this movie is &lt;mask&gt;”);\n",
"* 10%时间为随机词元(例如,“this movie is great”变为“this movie is drink”);\n",
"* 10%时间内为不变的标签词元(例如,“this movie is great”变为“this movie is great”)。\n",
"\n",
"请注意,在15%的时间中,有10%的时间插入了随机词元。这种偶然的噪声鼓励BERT在其双向上下文编码中不那么偏向于掩蔽词元(尤其是当标签词元保持不变时)。\n",
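"\n",
"换算成整体比例(一个简单的算术核对):被选中预测的词元约占$15\\%$,因此输入序列中约$0.15 \\times 0.8 = 12\\%$的词元被替换为“&lt;mask&gt;”,各约$0.15 \\times 0.1 = 1.5\\%$的词元被替换为随机词元或保持原样。\n",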
"\n",
"我们实现了下面的`MaskLM`类来预测BERT预训练的掩蔽语言模型任务中的掩蔽标记。预测使用单隐藏层的多层感知机(`self.mlp`)。在前向推断中,它需要两个输入:`BERTEncoder`的编码结果和用于预测的词元位置。输出是这些位置的预测结果。\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "dc98249b",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:06.786473Z",
"iopub.status.busy": "2023-08-18T06:58:06.785498Z",
"iopub.status.idle": "2023-08-18T06:58:06.795323Z",
"shell.execute_reply": "2023-08-18T06:58:06.794249Z"
},
"origin_pos": 19,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"class MaskLM(nn.Module):\n",
" \"\"\"BERT的掩蔽语言模型任务\"\"\"\n",
" def __init__(self, vocab_size, num_hiddens, num_inputs=768, **kwargs):\n",
" super(MaskLM, self).__init__(**kwargs)\n",
" self.mlp = nn.Sequential(nn.Linear(num_inputs, num_hiddens),\n",
" nn.ReLU(),\n",
" nn.LayerNorm(num_hiddens),\n",
" nn.Linear(num_hiddens, vocab_size))\n",
"\n",
" def forward(self, X, pred_positions):\n",
" num_pred_positions = pred_positions.shape[1]\n",
" pred_positions = pred_positions.reshape(-1)\n",
" batch_size = X.shape[0]\n",
" batch_idx = torch.arange(0, batch_size)\n",
" # 假设batch_size=2num_pred_positions=3\n",
"        # 那么batch_idx是tensor([0, 0, 0, 1, 1, 1])\n",
" batch_idx = torch.repeat_interleave(batch_idx, num_pred_positions)\n",
" masked_X = X[batch_idx, pred_positions]\n",
" masked_X = masked_X.reshape((batch_size, num_pred_positions, -1))\n",
" mlm_Y_hat = self.mlp(masked_X)\n",
" return mlm_Y_hat"
]
},
{
"cell_type": "markdown",
"id": "528b3d54",
"metadata": {
"origin_pos": 21
},
"source": [
"为了演示`MaskLM`的前向推断,我们创建了其实例`mlm`并对其进行了初始化。回想一下,来自`BERTEncoder`的前向推断得到的`encoded_X`表示2个BERT输入序列。我们将`mlm_positions`定义为在`encoded_X`的任一输入序列中要预测的3个位置索引。`mlm`的前向推断返回`encoded_X`的所有掩蔽位置`mlm_positions`处的预测结果`mlm_Y_hat`。对于每个预测,结果的大小等于词表的大小。\n",
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "ef4b0b28",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:06.800348Z",
"iopub.status.busy": "2023-08-18T06:58:06.799558Z",
"iopub.status.idle": "2023-08-18T06:58:06.905961Z",
"shell.execute_reply": "2023-08-18T06:58:06.905018Z"
},
"origin_pos": 23,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([2, 3, 10000])"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mlm = MaskLM(vocab_size, num_hiddens)\n",
"mlm_positions = torch.tensor([[1, 5, 2], [6, 1, 5]])\n",
"mlm_Y_hat = mlm(encoded_X, mlm_positions)\n",
"mlm_Y_hat.shape"
]
},
{
"cell_type": "markdown",
"id": "ac8fc7ae",
"metadata": {
"origin_pos": 25
},
"source": [
"根据掩蔽下的预测词元`mlm_Y_hat`及其真实标签`mlm_Y`,我们可以计算BERT预训练中掩蔽语言模型任务的交叉熵损失。\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "ace75d78",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:06.910802Z",
"iopub.status.busy": "2023-08-18T06:58:06.910165Z",
"iopub.status.idle": "2023-08-18T06:58:06.918066Z",
"shell.execute_reply": "2023-08-18T06:58:06.917108Z"
},
"origin_pos": 27,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([6])"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mlm_Y = torch.tensor([[7, 8, 9], [10, 20, 30]])\n",
"loss = nn.CrossEntropyLoss(reduction='none')\n",
"mlm_l = loss(mlm_Y_hat.reshape((-1, vocab_size)), mlm_Y.reshape(-1))\n",
"mlm_l.shape"
]
},
{
"cell_type": "markdown",
"id": "8ad54b6d",
"metadata": {
"origin_pos": 29
},
"source": [
"### 下一句预测(Next Sentence Prediction)\n",
":label:`subsec_nsp`\n",
"\n",
"尽管掩蔽语言建模能够编码双向上下文来表示单词,但它不能显式地建模文本对之间的逻辑关系。为了帮助理解两个文本序列之间的关系,BERT在预训练中考虑了一个二元分类任务——*下一句预测*。在为预训练生成句子对时,有一半的时间它们确实是标签为“真”的连续句子;在另一半的时间里,第二个句子是从语料库中随机抽取的,标记为“假”。\n",
"\n",
"下面的`NextSentencePred`类使用单隐藏层的多层感知机来预测第二个句子是否是BERT输入序列中第一个句子的下一个句子。由于Transformer编码器中的自注意力,特殊词元“&lt;cls&gt;”的BERT表示已经对输入的两个句子进行了编码。因此,多层感知机分类器的输出层(`self.output`)以`X`作为输入,其中`X`是多层感知机隐藏层的输出,而MLP隐藏层的输入是编码后的“&lt;cls&gt;”词元。\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "1d7be502",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:06.922549Z",
"iopub.status.busy": "2023-08-18T06:58:06.921958Z",
"iopub.status.idle": "2023-08-18T06:58:06.927273Z",
"shell.execute_reply": "2023-08-18T06:58:06.926309Z"
},
"origin_pos": 31,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"class NextSentencePred(nn.Module):\n",
" \"\"\"BERT的下一句预测任务\"\"\"\n",
" def __init__(self, num_inputs, **kwargs):\n",
" super(NextSentencePred, self).__init__(**kwargs)\n",
" self.output = nn.Linear(num_inputs, 2)\n",
"\n",
" def forward(self, X):\n",
"        # X的形状:(batch_size, num_hiddens)\n",
" return self.output(X)"
]
},
{
"cell_type": "markdown",
"id": "e89c6890",
"metadata": {
"origin_pos": 33
},
"source": [
"我们可以看到,`NextSentencePred`实例的前向推断返回每个BERT输入序列的二分类预测。\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "4542505a",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:06.932297Z",
"iopub.status.busy": "2023-08-18T06:58:06.931348Z",
"iopub.status.idle": "2023-08-18T06:58:06.939874Z",
"shell.execute_reply": "2023-08-18T06:58:06.938907Z"
},
"origin_pos": 35,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([2, 2])"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"encoded_X = torch.flatten(encoded_X, start_dim=1)\n",
"# NSP的输入形状:(batch_size, num_hiddens)\n",
"nsp = NextSentencePred(encoded_X.shape[-1])\n",
"nsp_Y_hat = nsp(encoded_X)\n",
"nsp_Y_hat.shape"
]
},
{
"cell_type": "markdown",
"id": "d8acdafa",
"metadata": {
"origin_pos": 37
},
"source": [
"还可以计算两个二元分类的交叉熵损失。\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "aaf7a84c",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:06.944820Z",
"iopub.status.busy": "2023-08-18T06:58:06.944049Z",
"iopub.status.idle": "2023-08-18T06:58:06.951717Z",
"shell.execute_reply": "2023-08-18T06:58:06.950547Z"
},
"origin_pos": 39,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"torch.Size([2])"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"nsp_y = torch.tensor([0, 1])\n",
"nsp_l = loss(nsp_Y_hat, nsp_y)\n",
"nsp_l.shape"
]
},
{
"cell_type": "markdown",
"id": "2d605fb6",
"metadata": {
"origin_pos": 41
},
"source": [
"值得注意的是,上述两个预训练任务中的所有标签都可以从预训练语料库中获得,而无需人工标注。原始的BERT已经在图书语料库 :cite:`Zhu.Kiros.Zemel.ea.2015`和英文维基百科的连接上进行了预训练。这两个文本语料库非常庞大:它们分别有8亿个单词和25亿个单词。\n",
"\n",
"## 整合代码\n",
"\n",
"在预训练BERT时,最终的损失函数是掩蔽语言模型损失函数和下一句预测损失函数的线性组合。现在我们可以通过实例化三个类`BERTEncoder`、`MaskLM`和`NextSentencePred`来定义`BERTModel`类。前向推断返回编码后的BERT表示`encoded_X`、掩蔽语言模型预测`mlm_Y_hat`和下一句预测`nsp_Y_hat`。\n"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "e5c5acd6",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:58:06.956805Z",
"iopub.status.busy": "2023-08-18T06:58:06.955956Z",
"iopub.status.idle": "2023-08-18T06:58:06.966697Z",
"shell.execute_reply": "2023-08-18T06:58:06.965474Z"
},
"origin_pos": 43,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"class BERTModel(nn.Module):\n",
" \"\"\"BERT模型\"\"\"\n",
" def __init__(self, vocab_size, num_hiddens, norm_shape, ffn_num_input,\n",
" ffn_num_hiddens, num_heads, num_layers, dropout,\n",
" max_len=1000, key_size=768, query_size=768, value_size=768,\n",
" hid_in_features=768, mlm_in_features=768,\n",
" nsp_in_features=768):\n",
" super(BERTModel, self).__init__()\n",
" self.encoder = BERTEncoder(vocab_size, num_hiddens, norm_shape,\n",
" ffn_num_input, ffn_num_hiddens, num_heads, num_layers,\n",
" dropout, max_len=max_len, key_size=key_size,\n",
" query_size=query_size, value_size=value_size)\n",
" self.hidden = nn.Sequential(nn.Linear(hid_in_features, num_hiddens),\n",
" nn.Tanh())\n",
" self.mlm = MaskLM(vocab_size, num_hiddens, mlm_in_features)\n",
" self.nsp = NextSentencePred(nsp_in_features)\n",
"\n",
" def forward(self, tokens, segments, valid_lens=None,\n",
" pred_positions=None):\n",
" encoded_X = self.encoder(tokens, segments, valid_lens)\n",
" if pred_positions is not None:\n",
" mlm_Y_hat = self.mlm(encoded_X, pred_positions)\n",
" else:\n",
" mlm_Y_hat = None\n",
" # 用于下一句预测的多层感知机分类器的隐藏层,0是“<cls>”标记的索引\n",
" nsp_Y_hat = self.nsp(self.hidden(encoded_X[:, 0, :]))\n",
" return encoded_X, mlm_Y_hat, nsp_Y_hat"
]
},
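{
"cell_type": "markdown",
"id": "b2c3d4e5",
"metadata": {},
"source": [
"上文提到,预训练BERT的最终损失是掩蔽语言模型损失与下一句预测损失的线性组合。延续前文算出的`mlm_l`和`nsp_l`,一个等权重的组合可以示意如下(仅为草图;实际预训练中掩蔽语言模型损失通常还会按掩蔽位置的权重做归一化):\n",
"\n",
"```python\n",
"# 两个预训练任务损失的最简单线性组合(等权重,仅作示意)\n",
"l = mlm_l.mean() + nsp_l.mean()\n",
"```\n"
]
},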
{
"cell_type": "markdown",
"id": "6d17579c",
"metadata": {
"origin_pos": 45
},
"source": [
"## 小结\n",
"\n",
"* word2vec和GloVe等词嵌入模型与上下文无关。它们将相同的预训练向量赋给同一个词,而不考虑词的上下文(如果有的话)。它们很难处理好自然语言中的一词多义或复杂语义。\n",
"* 对于上下文敏感的词表示,如ELMo和GPT,词的表示依赖于它们的上下文。\n",
"* ELMo对上下文进行双向编码,但使用特定于任务的架构(然而,为每个自然语言处理任务设计一个特定的体系架构实际上并不容易);而GPT是任务无关的,但是从左到右编码上下文。\n",
"* BERT结合了这两个方面的优点:它对上下文进行双向编码,并且需要对大量自然语言处理任务进行最小的架构更改。\n",
"* BERT输入序列的嵌入是词元嵌入、片段嵌入和位置嵌入的和。\n",
"* 预训练包括两个任务:掩蔽语言模型和下一句预测。前者能够编码双向上下文来表示单词,而后者则显式地建模文本对之间的逻辑关系。\n",
"\n",
"## 练习\n",
"\n",
"1. 为什么BERT成功了?\n",
"1. 在所有其他条件相同的情况下,掩蔽语言模型比从左到右的语言模型需要更多或更少的预训练步骤来收敛吗?为什么?\n",
"1. 在BERT的原始实现中,`BERTEncoder`中的位置前馈网络(通过`d2l.EncoderBlock`)和`MaskLM`中的全连接层都使用高斯误差线性单元(Gaussian error linear unitGELU :cite:`Hendrycks.Gimpel.2016`作为激活函数。研究GELU与ReLU之间的差异。\n"
]
},
{
"cell_type": "markdown",
"id": "6a271d51",
"metadata": {
"origin_pos": 47,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/5750)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,118 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "1aa10e3e",
"metadata": {
"origin_pos": 0
},
"source": [
"# 全局向量的词嵌入(GloVe\n",
":label:`sec_glove`\n",
"\n",
"上下文窗口内的词共现可以携带丰富的语义信息。例如,在一个大型语料库中,“固体”比“气体”更有可能与“冰”共现,但“气体”一词与“蒸汽”的共现频率可能比与“冰”的共现频率更高。此外,可以预先计算此类共现的全局语料库统计数据:这可以提高训练效率。为了利用整个语料库中的统计信息进行词嵌入,让我们首先回顾 :numref:`subsec_skip-gram`中的跳元模型,但是使用全局语料库统计(如共现计数)来解释它。\n",
"\n",
"## 带全局语料统计的跳元模型\n",
":label:`subsec_skipgram-global`\n",
"\n",
"用$q_{ij}$表示词$w_j$的条件概率$P(w_j\\mid w_i)$,在跳元模型中给定词$w_i$,我们有:\n",
"\n",
"$$q_{ij}=\\frac{\\exp(\\mathbf{u}_j^\\top \\mathbf{v}_i)}{\\sum_{k \\in \\mathcal{V}} \\exp(\\mathbf{u}_k^\\top \\mathbf{v}_i)},$$\n",
"\n",
"其中,对于任意索引$i$,向量$\\mathbf{v}_i$和$\\mathbf{u}_i$分别表示词$w_i$作为中心词和上下文词,且$\\mathcal{V} = \\{0, 1, \\ldots, |\\mathcal{V}|-1\\}$是词表的索引集。\n",
"\n",
"考虑词$w_i$可能在语料库中出现多次。在整个语料库中,所有以$w_i$为中心词的上下文词形成一个词索引的*多重集*$\\mathcal{C}_i$,该索引允许同一元素的多个实例。对于任何元素,其实例数称为其*重数*。举例说明,假设词$w_i$在语料库中出现两次,并且在两个上下文窗口中以$w_i$为其中心词的上下文词索引是$k, j, m, k$和$k, l, k, j$。因此,多重集$\\mathcal{C}_i = \\{j, j, k, k, k, k, l, m\\}$,其中元素$j, k, l, m$的重数分别为2、4、1、1。\n",
"\n",
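"为帮助理解上面的重数计算,可用`collections.Counter`复现该例子(这里以字符代替词索引$j, k, l, m$,仅作示意):\n",
"\n",
"```python\n",
"from collections import Counter\n",
"# 两个上下文窗口中以w_i为中心词的上下文词索引\n",
"Counter(['k', 'j', 'm', 'k'] + ['k', 'l', 'k', 'j'])\n",
"# Counter({'k': 4, 'j': 2, 'm': 1, 'l': 1})\n",
"```\n",
"\n",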
"现在,让我们将多重集$\\mathcal{C}_i$中的元素$j$的重数表示为$x_{ij}$。这是词$w_j$(作为上下文词)和词$w_i$(作为中心词)在整个语料库的同一上下文窗口中的全局共现计数。使用这样的全局语料库统计,跳元模型的损失函数等价于:\n",
"\n",
"$$-\\sum_{i\\in\\mathcal{V}}\\sum_{j\\in\\mathcal{V}} x_{ij} \\log\\,q_{ij}.$$\n",
":eqlabel:`eq_skipgram-x_ij`\n",
"\n",
"我们用$x_i$表示上下文窗口中的所有上下文词的数量,其中$w_i$作为它们的中心词出现,这相当于$|\\mathcal{C}_i|$。设$p_{ij}$为用于生成上下文词$w_j$的条件概率$x_{ij}/x_i$。给定中心词$w_i$ :eqref:`eq_skipgram-x_ij`可以重写为:\n",
"\n",
"$$-\\sum_{i\\in\\mathcal{V}} x_i \\sum_{j\\in\\mathcal{V}} p_{ij} \\log\\,q_{ij}.$$\n",
":eqlabel:`eq_skipgram-p_ij`\n",
"\n",
"在 :eqref:`eq_skipgram-p_ij`中,$-\\sum_{j\\in\\mathcal{V}} p_{ij} \\log\\,q_{ij}$计算全局语料统计的条件分布$p_{ij}$和模型预测的条件分布$q_{ij}$的交叉熵。如上所述,这一损失也按$x_i$加权。在 :eqref:`eq_skipgram-p_ij`中最小化损失函数将使预测的条件分布接近全局语料库统计中的条件分布。\n",
"\n",
"虽然交叉熵损失函数通常用于测量概率分布之间的距离,但在这里可能不是一个好的选择。一方面,正如我们在 :numref:`sec_approx_train`中提到的,规范化$q_{ij}$的代价在于对整个词表求和,这在计算上可能非常昂贵。另一方面,交叉熵损失往往会对来自大型语料库的大量罕见事件进行建模,从而赋予它们过多的权重。\n",
"\n",
"## GloVe模型\n",
"\n",
"有鉴于此,*GloVe*模型基于平方损失 :cite:`Pennington.Socher.Manning.2014`对跳元模型做了三个修改:\n",
"\n",
"1. 使用变量$p'_{ij}=x_{ij}$和$q'_{ij}=\\exp(\\mathbf{u}_j^\\top \\mathbf{v}_i)$而非概率分布,并取两者的对数。所以平方损失项是$\\left(\\log\\,p'_{ij} - \\log\\,q'_{ij}\\right)^2 = \\left(\\mathbf{u}_j^\\top \\mathbf{v}_i - \\log\\,x_{ij}\\right)^2$。\n",
"2. 为每个词$w_i$添加两个标量模型参数:中心词偏置$b_i$和上下文词偏置$c_i$。\n",
"3. 用权重函数$h(x_{ij})$替换每个损失项的权重,其中$h(x)$在区间$[0, 1]$内递增。\n",
"\n",
"综上所述,训练GloVe就是最小化以下损失函数:\n",
"\n",
"$$\\sum_{i\\in\\mathcal{V}} \\sum_{j\\in\\mathcal{V}} h(x_{ij}) \\left(\\mathbf{u}_j^\\top \\mathbf{v}_i + b_i + c_j - \\log\\,x_{ij}\\right)^2.$$\n",
":eqlabel:`eq_glove-loss`\n",
"\n",
"对于权重函数,建议的选择是:当$x < c$(例如,$c = 100$)时,$h(x) = (x/c) ^\\alpha$(例如$\\alpha = 0.75$);否则$h(x) = 1$。在这种情况下,由于$h(0)=0$,为了提高计算效率,可以省略任意$x_{ij}=0$的平方损失项。例如,当使用小批量随机梯度下降进行训练时,在每次迭代中,我们随机抽样一小批量*非零*的$x_{ij}$来计算梯度并更新模型参数。注意,这些非零的$x_{ij}$是预先计算的全局语料库统计数据;因此,该模型得名GloVe,即*全局向量*(Global Vectors)。\n",
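"\n",
"权重函数$h(x)$可以示意性地实现如下(取$c=100$、$\\alpha=0.75$,与上文建议一致,仅作演示):\n",
"\n",
"```python\n",
"def h(x, c=100, alpha=0.75):\n",
"    return (x / c) ** alpha if x < c else 1.0\n",
"\n",
"h(0), h(1), h(100)  # (0.0, 0.0316..., 1.0)\n",
"```\n",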
"\n",
"应该强调的是,当词$w_i$出现在词$w_j$的上下文窗口时,词$w_j$也出现在词$w_i$的上下文窗口。因此,$x_{ij}=x_{ji}$。与拟合非对称条件概率$p_{ij}$的word2vec不同,GloVe拟合对称的$\\log \\, x_{ij}$。因此,在GloVe模型中,任意词的中心词向量和上下文词向量在数学上是等价的。但在实际应用中,由于初始值不同,同一个词经过训练后,在这两个向量中可能得到不同的值:GloVe将它们相加作为输出向量。\n",
"\n",
"## 从条件概率比值理解GloVe模型\n",
"\n",
"我们也可以从另一个角度来理解GloVe模型。使用 :numref:`subsec_skipgram-global`中的相同符号,设$p_{ij} \\stackrel{\\mathrm{def}}{=} P(w_j \\mid w_i)$为生成上下文词$w_j$的条件概率,给定$w_i$作为语料库中的中心词。 :numref:`tab_glove`根据大量语料库的统计数据,列出了给定单词“ice”和“steam”的共现概率及其比值。\n",
"\n",
"大型语料库中的词-词共现概率及其比值(根据 :cite:`Pennington.Socher.Manning.2014`中的表1改编)\n",
"\n",
"|$w_k$=|solid|gas|water|fashion|\n",
"|:--|:-|:-|:-|:-|\n",
"|$p_1=P(w_k\\mid \\text{ice})$|0.00019|0.000066|0.003|0.000017|\n",
"|$p_2=P(w_k\\mid\\text{steam})$|0.000022|0.00078|0.0022|0.000018|\n",
"|$p_1/p_2$|8.9|0.085|1.36|0.96|\n",
":label:`tab_glove`\n",
"\n",
"从 :numref:`tab_glove`中,我们可以观察到以下几点:\n",
"\n",
"* 对于与“ice”相关但与“steam”无关的单词$w_k$,例如$w_k=\\text{solid}$,我们预计会有更大的共现概率比值,例如8.9。\n",
"* 对于与“steam”相关但与“ice”无关的单词$w_k$,例如$w_k=\\text{gas}$,我们预计较小的共现概率比值,例如0.085。\n",
"* 对于同时与“ice”和“steam”相关的单词$w_k$,例如$w_k=\\text{water}$,我们预计其共现概率的比值接近1,例如1.36。\n",
"* 对于与“ice”和“steam”都不相关的单词$w_k$,例如$w_k=\\text{fashion}$,我们预计共现概率的比值接近1,例如0.96。\n",
"\n",
"由此可见,共现概率的比值能够直观地表达词与词之间的关系。因此,我们可以设计三个词向量的函数来拟合这个比值。对于共现概率${p_{ij}}/{p_{ik}}$的比值,其中$w_i$是中心词,$w_j$和$w_k$是上下文词,我们希望使用某个函数$f$来拟合该比值:\n",
"\n",
"$$f(\\mathbf{u}_j, \\mathbf{u}_k, {\\mathbf{v}}_i) \\approx \\frac{p_{ij}}{p_{ik}}.$$\n",
":eqlabel:`eq_glove-f`\n",
"\n",
"在$f$的许多可能的设计中,我们仅选择一种合理的设计。因为共现概率的比值是标量,所以我们要求$f$是标量函数,例如$f(\\mathbf{u}_j, \\mathbf{u}_k, {\\mathbf{v}}_i) = f\\left((\\mathbf{u}_j - \\mathbf{u}_k)^\\top {\\mathbf{v}}_i\\right)$。在 :eqref:`eq_glove-f`中交换词索引$j$和$k$后,$f$必须满足$f(x)f(-x)=1$,因此一种可能性是$f(x)=\\exp(x)$,即:\n",
"\n",
"$$f(\\mathbf{u}_j, \\mathbf{u}_k, {\\mathbf{v}}_i) = \\frac{\\exp\\left(\\mathbf{u}_j^\\top {\\mathbf{v}}_i\\right)}{\\exp\\left(\\mathbf{u}_k^\\top {\\mathbf{v}}_i\\right)} \\approx \\frac{p_{ij}}{p_{ik}}.$$\n",
"\n",
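"可以数值验证$f(x)=\\exp(x)$满足交换索引所要求的性质$f(x)f(-x)=1$(取$x=1.5$仅作示意):\n",
"\n",
"```python\n",
"import math\n",
"x = 1.5\n",
"math.exp(x) * math.exp(-x)  # 在浮点误差范围内等于1.0\n",
"```\n",
"\n",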
"现在让我们选择$\\exp\\left(\\mathbf{u}_j^\\top {\\mathbf{v}}_i\\right) \\approx \\alpha p_{ij}$,其中$\\alpha$是常数。从$p_{ij}=x_{ij}/x_i$开始,取两边的对数得到$\\mathbf{u}_j^\\top {\\mathbf{v}}_i \\approx \\log\\,\\alpha + \\log\\,x_{ij} - \\log\\,x_i$。我们可以使用附加的偏置项来拟合$- \\log\\, \\alpha + \\log\\, x_i$,如中心词偏置$b_i$和上下文词偏置$c_j$:\n",
"\n",
"$$\\mathbf{u}_j^\\top \\mathbf{v}_i + b_i + c_j \\approx \\log\\, x_{ij}.$$\n",
":eqlabel:`eq_glove-square`\n",
"\n",
"通过对 :eqref:`eq_glove-square`的加权平方误差的度量,得到了 :eqref:`eq_glove-loss`的GloVe损失函数。\n",
"\n",
"## 小结\n",
"\n",
"* 诸如词-词共现计数的全局语料库统计可以用来解释跳元模型。\n",
"* 交叉熵损失可能不是衡量两种概率分布差异的好选择,特别是对于大型语料库。GloVe使用平方损失来拟合预先计算的全局语料库统计数据。\n",
"* 对于GloVe中的任意词,中心词向量和上下文词向量在数学上是等价的。\n",
"* GloVe可以从词-词共现概率的比率来解释。\n",
"\n",
"## 练习\n",
"\n",
"1. 如果词$w_i$和$w_j$在同一上下文窗口中同时出现,我们如何使用它们在文本序列中的距离来重新设计计算条件概率$p_{ij}$的方法?提示:参见GloVe论文 :cite:`Pennington.Socher.Manning.2014`的第4.2节。\n",
"1. 对于任何一个词,它的中心词偏置和上下文偏置在数学上是等价的吗?为什么?\n",
"\n",
"[Discussions](https://discuss.d2l.ai/t/5736)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,73 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "a820fc25",
"metadata": {
"origin_pos": 0
},
"source": [
"# 自然语言处理:预训练\n",
":label:`chap_nlp_pretrain`\n",
"\n",
"人与人之间需要交流。\n",
"出于人类这种基本需要,每天都有大量的书面文本产生。\n",
"比如,社交媒体、聊天应用、电子邮件、产品评论、新闻文章、\n",
"研究论文和书籍中的丰富文本,\n",
"使计算机能够理解它们以提供帮助或基于人类语言做出决策变得至关重要。\n",
"\n",
"*自然语言处理*是指研究使用自然语言的计算机和人类之间的交互。\n",
"在实践中,使用自然语言处理技术来处理和分析文本数据是非常常见的,\n",
"例如 :numref:`sec_language_model`的语言模型\n",
"和 :numref:`sec_machine_translation`的机器翻译模型。\n",
"\n",
"要理解文本,我们可以从学习它的表示开始。\n",
"利用来自大型语料库的现有文本序列,\n",
"*自监督学习*self-supervised learning\n",
"已被广泛用于预训练文本表示,\n",
"例如通过使用周围文本的其它部分来预测文本的隐藏部分。\n",
"通过这种方式,模型可以通过有监督地从*海量*文本数据中学习,而不需要*昂贵*的标签标注!\n",
"\n",
"本章我们将看到:当将每个单词或子词视为单个词元时,\n",
"可以在大型语料库上使用word2vec、GloVe或子词嵌入模型预先训练每个词元的表示。\n",
"经过预训练后,每个词元的表示可以是一个向量。\n",
"但是,无论上下文是什么,它都保持不变。\n",
"例如,“bank”(可以译作银行或者河岸)的向量表示在\n",
"“go to the bank to deposit some money”(去银行存点钱)\n",
"和“go to the bank to sit down”(去河岸坐下来)中是相同的。\n",
"因此,许多较新的预训练模型使相同词元的表示适应于不同的上下文,\n",
"其中包括基于Transformer编码器的更深的自监督模型BERT。\n",
"在本章中,我们将重点讨论如何预训练文本的这种表示,\n",
"如 :numref:`fig_nlp-map-pretrain`中所强调的那样。\n",
"\n",
"![预训练好的文本表示可以放入各种深度学习架构,应用于不同自然语言处理任务(本章主要研究上游文本的预训练)](../img/nlp-map-pretrain.svg)\n",
":label:`fig_nlp-map-pretrain`\n",
"\n",
" :numref:`fig_nlp-map-pretrain`显示了\n",
"预训练好的文本表示可以放入各种深度学习架构,应用于不同自然语言处理任务。\n",
"我们将在 :numref:`chap_nlp_app`中介绍它们。\n",
"\n",
":begin_tab:toc\n",
" - [word2vec](word2vec.ipynb)\n",
" - [approx-training](approx-training.ipynb)\n",
" - [word-embedding-dataset](word-embedding-dataset.ipynb)\n",
" - [word2vec-pretraining](word2vec-pretraining.ipynb)\n",
" - [glove](glove.ipynb)\n",
" - [subword-embedding](subword-embedding.ipynb)\n",
" - [similarity-analogy](similarity-analogy.ipynb)\n",
" - [bert](bert.ipynb)\n",
" - [bert-dataset](bert-dataset.ipynb)\n",
" - [bert-pretraining](bert-pretraining.ipynb)\n",
":end_tab:\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,718 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "0160c8de",
"metadata": {
"origin_pos": 0
},
"source": [
"# 词的相似性和类比任务\n",
":label:`sec_synonyms`\n",
"\n",
"在 :numref:`sec_word2vec_pretraining`中,我们在一个小的数据集上训练了一个word2vec模型,并使用它为一个输入词寻找语义相似的词。实际上,在大型语料库上预先训练的词向量可以应用于下游的自然语言处理任务,这将在后面的 :numref:`chap_nlp_app`中讨论。为了直观地演示大型语料库中预训练词向量的语义,让我们将预训练词向量应用到词的相似性和类比任务中。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "f23dc33a",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:41.256400Z",
"iopub.status.busy": "2023-08-18T07:06:41.255749Z",
"iopub.status.idle": "2023-08-18T07:06:43.288113Z",
"shell.execute_reply": "2023-08-18T07:06:43.287240Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import os\n",
"import torch\n",
"from torch import nn\n",
"from d2l import torch as d2l"
]
},
{
"cell_type": "markdown",
"id": "ce6db3d6",
"metadata": {
"origin_pos": 4
},
"source": [
"## 加载预训练词向量\n",
"\n",
"以下列出维度为50、100和300的预训练GloVe嵌入,可从[GloVe网站](https://nlp.stanford.edu/projects/glove/)下载。预训练的fastText嵌入有多种语言。这里我们使用可以从[fastText网站](https://fasttext.cc/)下载300维度的英文版本(“wiki.en”)。\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "89f705ca",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:43.292543Z",
"iopub.status.busy": "2023-08-18T07:06:43.291837Z",
"iopub.status.idle": "2023-08-18T07:06:43.297097Z",
"shell.execute_reply": "2023-08-18T07:06:43.296299Z"
},
"origin_pos": 5,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"d2l.DATA_HUB['glove.6b.50d'] = (d2l.DATA_URL + 'glove.6B.50d.zip',\n",
" '0b8703943ccdb6eb788e6f091b8946e82231bc4d')\n",
"\n",
"#@save\n",
"d2l.DATA_HUB['glove.6b.100d'] = (d2l.DATA_URL + 'glove.6B.100d.zip',\n",
" 'cd43bfb07e44e6f27cbcc7bc9ae3d80284fdaf5a')\n",
"\n",
"#@save\n",
"d2l.DATA_HUB['glove.42b.300d'] = (d2l.DATA_URL + 'glove.42B.300d.zip',\n",
" 'b5116e234e9eb9076672cfeabf5469f3eec904fa')\n",
"\n",
"#@save\n",
"d2l.DATA_HUB['wiki.en'] = (d2l.DATA_URL + 'wiki.en.zip',\n",
" 'c1816da3821ae9f43899be655002f6c723e91b88')"
]
},
{
"cell_type": "markdown",
"id": "8368bbae",
"metadata": {
"origin_pos": 6
},
"source": [
"为了加载这些预训练的GloVe和fastText嵌入,我们定义了以下`TokenEmbedding`类。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "cd54118c",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:43.300883Z",
"iopub.status.busy": "2023-08-18T07:06:43.300205Z",
"iopub.status.idle": "2023-08-18T07:06:43.309328Z",
"shell.execute_reply": "2023-08-18T07:06:43.308481Z"
},
"origin_pos": 7,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"class TokenEmbedding:\n",
" \"\"\"GloVe嵌入\"\"\"\n",
" def __init__(self, embedding_name):\n",
" self.idx_to_token, self.idx_to_vec = self._load_embedding(\n",
" embedding_name)\n",
" self.unknown_idx = 0\n",
" self.token_to_idx = {token: idx for idx, token in\n",
" enumerate(self.idx_to_token)}\n",
"\n",
" def _load_embedding(self, embedding_name):\n",
" idx_to_token, idx_to_vec = ['<unk>'], []\n",
" data_dir = d2l.download_extract(embedding_name)\n",
" # GloVe网站:https://nlp.stanford.edu/projects/glove/\n",
" # fastText网站:https://fasttext.cc/\n",
" with open(os.path.join(data_dir, 'vec.txt'), 'r') as f:\n",
" for line in f:\n",
" elems = line.rstrip().split(' ')\n",
" token, elems = elems[0], [float(elem) for elem in elems[1:]]\n",
" # 跳过标题信息,例如fastText中的首行\n",
" if len(elems) > 1:\n",
" idx_to_token.append(token)\n",
" idx_to_vec.append(elems)\n",
" idx_to_vec = [[0] * len(idx_to_vec[0])] + idx_to_vec\n",
" return idx_to_token, torch.tensor(idx_to_vec)\n",
"\n",
" def __getitem__(self, tokens):\n",
" indices = [self.token_to_idx.get(token, self.unknown_idx)\n",
" for token in tokens]\n",
" vecs = self.idx_to_vec[torch.tensor(indices)]\n",
" return vecs\n",
"\n",
" def __len__(self):\n",
" return len(self.idx_to_token)"
]
},
{
"cell_type": "markdown",
"id": "6375fd2e",
"metadata": {
"origin_pos": 8
},
"source": [
"下面我们加载50维GloVe嵌入(在维基百科的子集上预训练)。创建`TokenEmbedding`实例时,如果尚未下载指定的嵌入文件,则必须下载该文件。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "ac49581b",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:43.312986Z",
"iopub.status.busy": "2023-08-18T07:06:43.312409Z",
"iopub.status.idle": "2023-08-18T07:06:54.396038Z",
"shell.execute_reply": "2023-08-18T07:06:54.395176Z"
},
"origin_pos": 9,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloading ../data/glove.6B.50d.zip from http://d2l-data.s3-accelerate.amazonaws.com/glove.6B.50d.zip...\n"
]
}
],
"source": [
"glove_6b50d = TokenEmbedding('glove.6b.50d')"
]
},
{
"cell_type": "markdown",
"id": "57f30d4e",
"metadata": {
"origin_pos": 10
},
"source": [
"输出词表大小。词表包含400000个词(词元)和一个特殊的未知词元。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "5d91a982",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:54.400162Z",
"iopub.status.busy": "2023-08-18T07:06:54.399579Z",
"iopub.status.idle": "2023-08-18T07:06:54.405466Z",
"shell.execute_reply": "2023-08-18T07:06:54.404676Z"
},
"origin_pos": 11,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"400001"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(glove_6b50d)"
]
},
{
"cell_type": "markdown",
"id": "867f2106",
"metadata": {
"origin_pos": 12
},
"source": [
"我们可以得到词表中一个单词的索引,反之亦然。\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "6e10f262",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:54.408746Z",
"iopub.status.busy": "2023-08-18T07:06:54.408294Z",
"iopub.status.idle": "2023-08-18T07:06:54.413468Z",
"shell.execute_reply": "2023-08-18T07:06:54.412687Z"
},
"origin_pos": 13,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"(3367, 'beautiful')"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"glove_6b50d.token_to_idx['beautiful'], glove_6b50d.idx_to_token[3367]"
]
},
{
"cell_type": "markdown",
"id": "92b6c303",
"metadata": {
"origin_pos": 14
},
"source": [
"## 应用预训练词向量\n",
"\n",
"使用加载的GloVe向量,我们将在下面的词相似性和类比任务中展示词向量的语义。\n",
"\n",
"### 词相似度\n",
"\n",
"与 :numref:`subsec_apply-word-embed`类似,为了根据词向量之间的余弦相似性为输入词查找语义相似的词,我们实现了以下`knn`($k$近邻)函数。\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "2da78732",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:54.416901Z",
"iopub.status.busy": "2023-08-18T07:06:54.416268Z",
"iopub.status.idle": "2023-08-18T07:06:54.421648Z",
"shell.execute_reply": "2023-08-18T07:06:54.420466Z"
},
"origin_pos": 16,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"def knn(W, x, k):\n",
" # 增加1e-9以获得数值稳定性\n",
" cos = torch.mv(W, x.reshape(-1,)) / (\n",
" torch.sqrt(torch.sum(W * W, axis=1) + 1e-9) *\n",
" torch.sqrt((x * x).sum()))\n",
" _, topk = torch.topk(cos, k=k)\n",
" return topk, [cos[int(i)] for i in topk]"
]
},
{
"cell_type": "markdown",
"id": "644a758d",
"metadata": {
"origin_pos": 18
},
"source": [
"然后,我们使用`TokenEmbedding`的实例`embed`中预训练好的词向量来搜索相似的词。\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "7b1da561",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:54.425376Z",
"iopub.status.busy": "2023-08-18T07:06:54.424618Z",
"iopub.status.idle": "2023-08-18T07:06:54.430025Z",
"shell.execute_reply": "2023-08-18T07:06:54.428981Z"
},
"origin_pos": 19,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"def get_similar_tokens(query_token, k, embed):\n",
" topk, cos = knn(embed.idx_to_vec, embed[[query_token]], k + 1)\n",
" for i, c in zip(topk[1:], cos[1:]): # 排除输入词\n",
"        print(f'{embed.idx_to_token[int(i)]}:cosine相似度={float(c):.3f}')"
]
},
{
"cell_type": "markdown",
"id": "6ba6f5c8",
"metadata": {
"origin_pos": 20
},
"source": [
"`glove_6b50d`中预训练词向量的词表包含400000个词和一个特殊的未知词元。排除输入词和未知词元后,我们在词表中找到与“chip”一词语义最相似的三个词。\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "623bc4a9",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:54.433258Z",
"iopub.status.busy": "2023-08-18T07:06:54.432943Z",
"iopub.status.idle": "2023-08-18T07:06:54.481827Z",
"shell.execute_reply": "2023-08-18T07:06:54.480628Z"
},
"origin_pos": 21,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"chips:cosine相似度=0.856\n",
"intel:cosine相似度=0.749\n",
"electronics:cosine相似度=0.749\n"
]
}
],
"source": [
"get_similar_tokens('chip', 3, glove_6b50d)"
]
},
{
"cell_type": "markdown",
"id": "c18fa17a",
"metadata": {
"origin_pos": 22
},
"source": [
"下面输出与“baby”和“beautiful”相似的词。\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "d2fd5e8f",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:54.486458Z",
"iopub.status.busy": "2023-08-18T07:06:54.485962Z",
"iopub.status.idle": "2023-08-18T07:06:54.508991Z",
"shell.execute_reply": "2023-08-18T07:06:54.507938Z"
},
"origin_pos": 23,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"babies:cosine相似度=0.839\n",
"boy:cosine相似度=0.800\n",
"girl:cosine相似度=0.792\n"
]
}
],
"source": [
"get_similar_tokens('baby', 3, glove_6b50d)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "faa9e2e2",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:54.513356Z",
"iopub.status.busy": "2023-08-18T07:06:54.512976Z",
"iopub.status.idle": "2023-08-18T07:06:54.534489Z",
"shell.execute_reply": "2023-08-18T07:06:54.533425Z"
},
"origin_pos": 24,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"lovely:cosine相似度=0.921\n",
"gorgeous:cosine相似度=0.893\n",
"wonderful:cosine相似度=0.830\n"
]
}
],
"source": [
"get_similar_tokens('beautiful', 3, glove_6b50d)"
]
},
{
"cell_type": "markdown",
"id": "5cc0553d",
"metadata": {
"origin_pos": 25
},
"source": [
"### 词类比\n",
"\n",
"除了找到相似的词,我们还可以将词向量应用到词类比任务中。\n",
"例如,“man” : “woman” :: “son” : “daughter”是一个词的类比。\n",
"即“man”之于“woman”,相当于“son”之于“daughter”。\n",
"具体来说,词类比任务可以定义为:\n",
"对于单词类比$a : b :: c : d$,给出前三个词$a$、$b$和$c$,找到$d$。\n",
"用$\\text{vec}(w)$表示词$w$的向量,\n",
"为了完成这个类比,我们将找到一个词,\n",
"其向量与$\\text{vec}(c)+\\text{vec}(b)-\\text{vec}(a)$的结果最相似。\n"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "e5340469",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:54.539108Z",
"iopub.status.busy": "2023-08-18T07:06:54.538593Z",
"iopub.status.idle": "2023-08-18T07:06:54.544150Z",
"shell.execute_reply": "2023-08-18T07:06:54.543191Z"
},
"origin_pos": 26,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"def get_analogy(token_a, token_b, token_c, embed):\n",
" vecs = embed[[token_a, token_b, token_c]]\n",
" x = vecs[1] - vecs[0] + vecs[2]\n",
" topk, cos = knn(embed.idx_to_vec, x, 1)\n",
" return embed.idx_to_token[int(topk[0])] # 删除未知词"
]
},
{
"cell_type": "markdown",
"id": "df8f2721",
"metadata": {
"origin_pos": 27
},
"source": [
"让我们使用加载的词向量来验证“male-female”类比。\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "e91de1ce",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:54.548236Z",
"iopub.status.busy": "2023-08-18T07:06:54.547963Z",
"iopub.status.idle": "2023-08-18T07:06:54.569097Z",
"shell.execute_reply": "2023-08-18T07:06:54.568018Z"
},
"origin_pos": 28,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"'daughter'"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"get_analogy('man', 'woman', 'son', glove_6b50d)"
]
},
{
"cell_type": "markdown",
"id": "d9b1ce80",
"metadata": {
"origin_pos": 29
},
"source": [
"下面完成一个“首都-国家”的类比:\n",
"“beijing” : “china” :: “tokyo” : “japan”。\n",
"这说明了预训练词向量中的语义。\n"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "16eb56d3",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:54.573551Z",
"iopub.status.busy": "2023-08-18T07:06:54.573270Z",
"iopub.status.idle": "2023-08-18T07:06:54.595104Z",
"shell.execute_reply": "2023-08-18T07:06:54.594092Z"
},
"origin_pos": 30,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"'japan'"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"get_analogy('beijing', 'china', 'tokyo', glove_6b50d)"
]
},
{
"cell_type": "markdown",
"id": "595634f2",
"metadata": {
"origin_pos": 31
},
"source": [
"另外,对于“bad” : “worst” :: “big” : “biggest”等“形容词-形容词最高级”的类比,预训练词向量可以捕捉到句法信息。\n"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "b8d6395b",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:54.599698Z",
"iopub.status.busy": "2023-08-18T07:06:54.599313Z",
"iopub.status.idle": "2023-08-18T07:06:54.621533Z",
"shell.execute_reply": "2023-08-18T07:06:54.620486Z"
},
"origin_pos": 32,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"'biggest'"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"get_analogy('bad', 'worst', 'big', glove_6b50d)"
]
},
{
"cell_type": "markdown",
"id": "a6555f30",
"metadata": {
"origin_pos": 33
},
"source": [
"为了演示在预训练词向量中捕捉到的过去式概念,我们可以使用“现在式-过去式”的类比来测试句法:“do” : “did” :: “go” : “went”。\n"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "986fa401",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:06:54.626086Z",
"iopub.status.busy": "2023-08-18T07:06:54.625554Z",
"iopub.status.idle": "2023-08-18T07:06:54.647570Z",
"shell.execute_reply": "2023-08-18T07:06:54.646604Z"
},
"origin_pos": 34,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"'went'"
]
},
"execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"get_analogy('do', 'did', 'go', glove_6b50d)"
]
},
{
"cell_type": "markdown",
"id": "61371af5",
"metadata": {
"origin_pos": 35
},
"source": [
"## 小结\n",
"\n",
"* 在实践中,在大型语料库上预训练的词向量可以应用于下游的自然语言处理任务。\n",
"* 预训练的词向量可以应用于词的相似性和类比任务。\n",
"\n",
"## 练习\n",
"\n",
"1. 使用`TokenEmbedding('wiki.en')`测试fastText结果。\n",
"1. 当词表非常大时,我们怎样才能更快地找到相似的词或完成一个词的类比呢?\n"
]
},
{
"cell_type": "markdown",
"id": "bfebf384",
"metadata": {
"origin_pos": 37,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/5746)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,447 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "2eef48dc",
"metadata": {
"origin_pos": 0
},
"source": [
"# 子词嵌入\n",
":label:`sec_fasttext`\n",
"\n",
"在英语中,“helps”“helped”和“helping”等单词都是同一个词“help”的变形形式。“dog”和“dogs”之间的关系与“cat”和“cats”之间的关系相同,“boy”和“boyfriend”之间的关系与“girl”和“girlfriend”之间的关系相同。在法语和西班牙语等其他语言中,许多动词有40多种变形形式,而在芬兰语中,名词最多可能有15种变形。在语言学中,形态学研究单词形成和词汇关系。但是,word2vec和GloVe都没有对词的内部结构进行探讨。\n",
"\n",
"## fastText模型\n",
"\n",
"回想一下词在word2vec中是如何表示的。在跳元模型和连续词袋模型中,同一词的不同变形形式直接由不同的向量表示,不共享参数。为了使用形态信息,*fastText模型*提出了一种*子词嵌入*方法,其中子词是一个字符$n$-gram :cite:`Bojanowski.Grave.Joulin.ea.2017`。fastText可以被认为是一种子词级跳元模型:它不学习词级向量表示,而是将每个*中心词*表示为其子词向量之和。\n",
"\n",
"让我们来说明如何以单词“where”为例获得fastText中每个中心词的子词。首先,在词的开头和末尾添加特殊字符“&lt;”和“&gt;”,以将前缀和后缀与其他子词区分开来。\n",
"然后,从词中提取字符$n$-gram。\n",
"例如,值$n=3$时,我们将获得长度为3的所有子词:\n",
"“&lt;wh”“whe”“her”“ere”“re&gt;”和特殊子词“&lt;where&gt;”。\n",
"\n",
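"上述子词提取过程可以示意性地实现为(仅演示固定$n$的情形):\n",
"\n",
"```python\n",
"def get_subwords(word, n=3):\n",
"    # 在词的首尾添加特殊字符'<'和'>',再提取长度为n的所有子词\n",
"    token = '<' + word + '>'\n",
"    subwords = [token[i:i + n] for i in range(len(token) - n + 1)]\n",
"    return subwords + [token]  # 末尾附加特殊子词'<word>'\n",
"\n",
"get_subwords('where')\n",
"# ['<wh', 'whe', 'her', 'ere', 're>', '<where>']\n",
"```\n",
"\n",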
"在fastText中,对于任意词$w$,用$\\mathcal{G}_w$表示其长度在3和6之间的所有子词与其特殊子词的并集。词表是所有词的子词的集合。假设$\\mathbf{z}_g$是词典中的子词$g$的向量,则跳元模型中作为中心词的词$w$的向量$\\mathbf{v}_w$是其子词向量的和:\n",
"\n",
"$$\\mathbf{v}_w = \\sum_{g\\in\\mathcal{G}_w} \\mathbf{z}_g.$$\n",
"\n",
"fastText的其余部分与跳元模型相同。与跳元模型相比,fastText的词表更大,模型参数也更多。此外,为了计算一个词的表示,它的所有子词向量都必须求和,这导致了更高的计算复杂度。然而,由于具有相似结构的词之间共享来自子词的参数,罕见词甚至词表外的词在fastText中可能获得更好的向量表示。\n",
"\n",
"## 字节对编码(Byte Pair Encoding\n",
":label:`subsec_Byte_Pair_Encoding`\n",
"\n",
"在fastText中,所有提取的子词都必须是指定的长度,例如$3$到$6$,因此词表大小不能预定义。为了在固定大小的词表中允许可变长度的子词,我们可以应用一种称为*字节对编码*(Byte Pair EncodingBPE)的压缩算法来提取子词 :cite:`Sennrich.Haddow.Birch.2015`。\n",
"\n",
"字节对编码执行训练数据集的统计分析,以发现单词内的公共符号,诸如任意长度的连续字符。从长度为1的符号开始,字节对编码迭代地合并最频繁的连续符号对以产生新的更长的符号。请注意,为提高效率,不考虑跨越单词边界的对。最后,我们可以使用像子词这样的符号来切分单词。字节对编码及其变体已经用于诸如GPT-2 :cite:`Radford.Wu.Child.ea.2019`和RoBERTa :cite:`Liu.Ott.Goyal.ea.2019`等自然语言处理预训练模型中的输入表示。在下面,我们将说明字节对编码是如何工作的。\n",
"\n",
"首先,我们将符号词表初始化为所有英文小写字符、特殊的词尾符号`'_'`和特殊的未知符号`'[UNK]'`。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "70df59d8",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:56:35.604170Z",
"iopub.status.busy": "2023-08-18T06:56:35.603510Z",
"iopub.status.idle": "2023-08-18T06:56:35.611979Z",
"shell.execute_reply": "2023-08-18T06:56:35.611231Z"
},
"origin_pos": 1,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import collections\n",
"\n",
"symbols = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',\n",
" 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z',\n",
" '_', '[UNK]']"
]
},
{
"cell_type": "markdown",
"id": "f94dab27",
"metadata": {
"origin_pos": 3
},
"source": [
"因为我们不考虑跨越词边界的符号对,所以我们只需要一个字典`raw_token_freqs`将词映射到数据集中的频率(出现次数)。注意,特殊符号`'_'`被附加到每个词的尾部,以便我们可以容易地从输出符号序列(例如,“a_ tall er_ man”)恢复单词序列(例如,“a taller man”)。由于我们仅从单个字符和特殊符号的词开始合并处理,所以在每个词(词典`token_freqs`的键)内的每对连续字符之间插入空格。换句话说,空格是词中符号之间的分隔符。\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "6a26ec96",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:56:35.615843Z",
"iopub.status.busy": "2023-08-18T06:56:35.615201Z",
"iopub.status.idle": "2023-08-18T06:56:35.623942Z",
"shell.execute_reply": "2023-08-18T06:56:35.623209Z"
},
"origin_pos": 4,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"{'f a s t _': 4, 'f a s t e r _': 3, 't a l l _': 5, 't a l l e r _': 4}"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"raw_token_freqs = {'fast_': 4, 'faster_': 3, 'tall_': 5, 'taller_': 4}\n",
"token_freqs = {}\n",
"for token, freq in raw_token_freqs.items():\n",
" token_freqs[' '.join(list(token))] = raw_token_freqs[token]\n",
"token_freqs"
]
},
{
"cell_type": "markdown",
"id": "8ee2d216",
"metadata": {
"origin_pos": 5
},
"source": [
"我们定义以下`get_max_freq_pair`函数,其返回词内最频繁的连续符号对,其中词来自输入词典`token_freqs`的键。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "874de73a",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:56:35.627616Z",
"iopub.status.busy": "2023-08-18T06:56:35.627025Z",
"iopub.status.idle": "2023-08-18T06:56:35.631950Z",
"shell.execute_reply": "2023-08-18T06:56:35.631221Z"
},
"origin_pos": 6,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"def get_max_freq_pair(token_freqs):\n",
" pairs = collections.defaultdict(int)\n",
" for token, freq in token_freqs.items():\n",
" symbols = token.split()\n",
" for i in range(len(symbols) - 1):\n",
" # “pairs”的键是两个连续符号的元组\n",
" pairs[symbols[i], symbols[i + 1]] += freq\n",
" return max(pairs, key=pairs.get) # 具有最大值的“pairs”键"
]
},
{
"cell_type": "markdown",
"id": "701a4399",
"metadata": {
"origin_pos": 7
},
"source": [
"作为基于连续符号频率的贪心方法,字节对编码将使用以下`merge_symbols`函数来合并最频繁的连续符号对以产生新符号。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "877dce88",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:56:35.635554Z",
"iopub.status.busy": "2023-08-18T06:56:35.634913Z",
"iopub.status.idle": "2023-08-18T06:56:35.639631Z",
"shell.execute_reply": "2023-08-18T06:56:35.638892Z"
},
"origin_pos": 8,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"def merge_symbols(max_freq_pair, token_freqs, symbols):\n",
" symbols.append(''.join(max_freq_pair))\n",
" new_token_freqs = dict()\n",
" for token, freq in token_freqs.items():\n",
" new_token = token.replace(' '.join(max_freq_pair),\n",
" ''.join(max_freq_pair))\n",
"        new_token_freqs[new_token] = freq\n",
" return new_token_freqs"
]
},
{
"cell_type": "markdown",
"id": "63e888f9",
"metadata": {
"origin_pos": 9
},
"source": [
"现在,我们对词典`token_freqs`的键迭代地执行字节对编码算法。在第一次迭代中,最频繁的连续符号对是`'t'`和`'a'`,因此字节对编码将它们合并以产生新符号`'ta'`。在第二次迭代中,字节对编码继续合并`'ta'`和`'l'`以产生另一个新符号`'tal'`。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "ea95bc7c",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:56:35.643247Z",
"iopub.status.busy": "2023-08-18T06:56:35.642643Z",
"iopub.status.idle": "2023-08-18T06:56:35.647847Z",
"shell.execute_reply": "2023-08-18T06:56:35.647061Z"
},
"origin_pos": 10,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"合并# 1: ('t', 'a')\n",
"合并# 2: ('ta', 'l')\n",
"合并# 3: ('tal', 'l')\n",
"合并# 4: ('f', 'a')\n",
"合并# 5: ('fa', 's')\n",
"合并# 6: ('fas', 't')\n",
"合并# 7: ('e', 'r')\n",
"合并# 8: ('er', '_')\n",
"合并# 9: ('tall', '_')\n",
"合并# 10: ('fast', '_')\n"
]
}
],
"source": [
"num_merges = 10\n",
"for i in range(num_merges):\n",
" max_freq_pair = get_max_freq_pair(token_freqs)\n",
" token_freqs = merge_symbols(max_freq_pair, token_freqs, symbols)\n",
" print(f'合并# {i+1}:',max_freq_pair)"
]
},
{
"cell_type": "markdown",
"id": "4fe6d30f",
"metadata": {
"origin_pos": 11
},
"source": [
"在字节对编码的10次迭代之后,我们可以看到列表`symbols`现在又包含10个从其他符号迭代合并而来的符号。\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "14d6459f",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:56:35.651408Z",
"iopub.status.busy": "2023-08-18T06:56:35.650818Z",
"iopub.status.idle": "2023-08-18T06:56:35.654893Z",
"shell.execute_reply": "2023-08-18T06:56:35.654143Z"
},
"origin_pos": 12,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', '_', '[UNK]', 'ta', 'tal', 'tall', 'fa', 'fas', 'fast', 'er', 'er_', 'tall_', 'fast_']\n"
]
}
],
"source": [
"print(symbols)"
]
},
{
"cell_type": "markdown",
"id": "70283228",
"metadata": {
"origin_pos": 13
},
"source": [
"对于在词典`raw_token_freqs`的键中指定的同一数据集,作为字节对编码算法的结果,数据集中的每个词现在被子词“fast_”“fast”“er_”“tall_”和“tall”切分。例如,单词“faster_”和“taller_”分别被切分为“fast er_”和“tall er_”。\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "93120bf0",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:56:35.658487Z",
"iopub.status.busy": "2023-08-18T06:56:35.657897Z",
"iopub.status.idle": "2023-08-18T06:56:35.662020Z",
"shell.execute_reply": "2023-08-18T06:56:35.661268Z"
},
"origin_pos": 14,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['fast_', 'fast er_', 'tall_', 'tall er_']\n"
]
}
],
"source": [
"print(list(token_freqs.keys()))"
]
},
{
"cell_type": "markdown",
"id": "83456139",
"metadata": {
"origin_pos": 15
},
"source": [
"请注意,字节对编码的结果取决于正在使用的数据集。我们还可以使用从一个数据集学习的子词来切分另一个数据集的单词。作为一种贪心方法,下面的`segment_BPE`函数尝试将单词从输入参数`symbols`分成可能最长的子词。\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "04e84fc1",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:56:35.665538Z",
"iopub.status.busy": "2023-08-18T06:56:35.664918Z",
"iopub.status.idle": "2023-08-18T06:56:35.670601Z",
"shell.execute_reply": "2023-08-18T06:56:35.669830Z"
},
"origin_pos": 16,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"def segment_BPE(tokens, symbols):\n",
" outputs = []\n",
" for token in tokens:\n",
" start, end = 0, len(token)\n",
" cur_output = []\n",
"        # 用symbols中可能最长的子词切分词元\n",
" while start < len(token) and start < end:\n",
" if token[start: end] in symbols:\n",
" cur_output.append(token[start: end])\n",
" start = end\n",
" end = len(token)\n",
" else:\n",
" end -= 1\n",
" if start < len(token):\n",
" cur_output.append('[UNK]')\n",
" outputs.append(' '.join(cur_output))\n",
" return outputs"
]
},
{
"cell_type": "markdown",
"id": "ce7118c8",
"metadata": {
"origin_pos": 17
},
"source": [
"我们使用列表`symbols`中的子词(从前面提到的数据集学习)来表示另一个数据集的`tokens`。\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "00e7e03a",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T06:56:35.674172Z",
"iopub.status.busy": "2023-08-18T06:56:35.673554Z",
"iopub.status.idle": "2023-08-18T06:56:35.677812Z",
"shell.execute_reply": "2023-08-18T06:56:35.677058Z"
},
"origin_pos": 18,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"['tall e s t _', 'fa t t er_']\n"
]
}
],
"source": [
"tokens = ['tallest_', 'fatter_']\n",
"print(segment_BPE(tokens, symbols))"
]
},
{
"cell_type": "markdown",
"id": "b4a5e47c",
"metadata": {
"origin_pos": 19
},
"source": [
"## 小结\n",
"\n",
"* fastText模型提出了一种子词嵌入方法:基于word2vec中的跳元模型,它将中心词表示为其子词向量之和。\n",
"* 字节对编码执行训练数据集的统计分析,以发现词内的公共符号。作为一种贪心方法,字节对编码迭代地合并最频繁的连续符号对。\n",
"* 子词嵌入可以提高稀有词和词典外词的表示质量。\n",
"\n",
"## 练习\n",
"\n",
"1. 例如,英语中大约有$3\\times 10^8$种可能的$6$-元组。子词太多会有什么问题呢?如何解决这个问题?提示:请参阅fastText论文第3.2节末尾 :cite:`Bojanowski.Grave.Joulin.ea.2017`。\n",
"1. 如何在连续词袋模型的基础上设计一个子词嵌入模型?\n",
"1. 要获得大小为$m$的词表,当初始符号词表大小为$n$时,需要多少合并操作?\n",
"1. 如何扩展字节对编码的思想来提取短语?\n"
]
},
{
"cell_type": "markdown",
"id": "09bfff94",
"metadata": {
"origin_pos": 21,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/5748)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}
@@ -0,0 +1,145 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "9594aa73",
"metadata": {
"origin_pos": 0
},
"source": [
"# 词嵌入(word2vec)\n",
":label:`sec_word2vec`\n",
"\n",
"自然语言是用来表达人脑思维的复杂系统。\n",
"在这个系统中,词是意义的基本单元。顾名思义,\n",
"*词向量*是用于表示单词意义的向量,\n",
"并且还可以被认为是单词的特征向量或表示。\n",
"将单词映射到实向量的技术称为*词嵌入*。\n",
"近年来,词嵌入逐渐成为自然语言处理的基础知识。\n",
"\n",
"## 为何独热向量是一个糟糕的选择\n",
"\n",
"在 :numref:`sec_rnn_scratch`中,我们使用独热向量来表示词(字符就是单词)。假设词典中不同词的数量(词典大小)为$N$,每个词对应一个从$0$到$N−1$的不同整数(索引)。为了得到索引为$i$的任意词的独热向量表示,我们创建了一个全为0的长度为$N$的向量,并将位置$i$的元素设置为1。这样,每个词都被表示为一个长度为$N$的向量,可以直接由神经网络使用。\n",
"\n",
"虽然独热向量很容易构建,但它们通常不是一个好的选择。一个主要原因是独热向量不能准确表达不同词之间的相似度,比如我们经常使用的“余弦相似度”。对于向量$\\mathbf{x}, \\mathbf{y} \\in \\mathbb{R}^d$,它们的余弦相似度是它们之间角度的余弦:\n",
"\n",
"$$\\frac{\\mathbf{x}^\\top \\mathbf{y}}{\\|\\mathbf{x}\\| \\|\\mathbf{y}\\|} \\in [-1, 1].$$\n",
"\n",
"由于任意两个不同词的独热向量之间的余弦相似度为0,所以独热向量不能编码词之间的相似性。\n",
"\n",
"## 自监督的word2vec\n",
"\n",
"[word2vec](https://code.google.com/archive/p/word2vec/)工具是为了解决上述问题而提出的。它将每个词映射到一个固定长度的向量,这些向量能更好地表达不同词之间的相似性和类比关系。word2vec工具包含两个模型,即*跳元模型*(skip-gram) :cite:`Mikolov.Sutskever.Chen.ea.2013`和*连续词袋*(CBOW) :cite:`Mikolov.Chen.Corrado.ea.2013`。对于在语义上有意义的表示,它们的训练依赖于条件概率,条件概率可以被看作使用语料库中一些词来预测另一些词。由于训练数据不带标签,因此跳元模型和连续词袋模型都是自监督模型。\n",
"\n",
"下面,我们将介绍这两种模型及其训练方法。\n",
"\n",
"## 跳元模型(Skip-Gram)\n",
":label:`subsec_skip-gram`\n",
"\n",
"跳元模型假设一个词可以用来在文本序列中生成其周围的单词。以文本序列“the”“man”“loves”“his”“son”为例。假设*中心词*选择“loves”,并将上下文窗口设置为2,如图 :numref:`fig_skip_gram`所示,给定中心词“loves”,跳元模型考虑生成*上下文词*“the”“man”“his”“son”的条件概率:\n",
"\n",
"$$P(\\textrm{\"the\"},\\textrm{\"man\"},\\textrm{\"his\"},\\textrm{\"son\"}\\mid\\textrm{\"loves\"}).$$\n",
"\n",
"假设上下文词是在给定中心词的情况下独立生成的(即条件独立性)。在这种情况下,上述条件概率可以重写为:\n",
"\n",
"$$P(\\textrm{\"the\"}\\mid\\textrm{\"loves\"})\\cdot P(\\textrm{\"man\"}\\mid\\textrm{\"loves\"})\\cdot P(\\textrm{\"his\"}\\mid\\textrm{\"loves\"})\\cdot P(\\textrm{\"son\"}\\mid\\textrm{\"loves\"}).$$\n",
"\n",
"![跳元模型考虑了在给定中心词的情况下生成周围上下文词的条件概率](../img/skip-gram.svg)\n",
":label:`fig_skip_gram`\n",
"\n",
"在跳元模型中,每个词都有两个$d$维向量表示,用于计算条件概率。更具体地说,对于词典中索引为$i$的任何词,分别用$\\mathbf{v}_i\\in\\mathbb{R}^d$和$\\mathbf{u}_i\\in\\mathbb{R}^d$表示其用作*中心词*和*上下文词*时的两个向量。给定中心词$w_c$(词典中的索引$c$),生成任何上下文词$w_o$(词典中的索引$o$)的条件概率可以通过对向量点积的softmax操作来建模:\n",
"\n",
"$$P(w_o \\mid w_c) = \\frac{\\text{exp}(\\mathbf{u}_o^\\top \\mathbf{v}_c)}{ \\sum_{i \\in \\mathcal{V}} \\text{exp}(\\mathbf{u}_i^\\top \\mathbf{v}_c)},$$\n",
":eqlabel:`eq_skip-gram-softmax`\n",
"\n",
"其中词表索引集$\\mathcal{V} = \\{0, 1, \\ldots, |\\mathcal{V}|-1\\}$。给定长度为$T$的文本序列,其中时间步$t$处的词表示为$w^{(t)}$。假设上下文词是在给定任何中心词的情况下独立生成的。对于上下文窗口$m$,跳元模型的似然函数是在给定任何中心词的情况下生成所有上下文词的概率:\n",
"\n",
"$$ \\prod_{t=1}^{T} \\prod_{-m \\leq j \\leq m,\\ j \\neq 0} P(w^{(t+j)} \\mid w^{(t)}),$$\n",
"\n",
"其中可以省略小于$1$或大于$T$的任何时间步。\n",
"\n",
"### 训练\n",
"\n",
"跳元模型参数是词表中每个词的中心词向量和上下文词向量。在训练中,我们通过最大化似然函数(即极大似然估计)来学习模型参数。这相当于最小化以下损失函数:\n",
"\n",
"$$ - \\sum_{t=1}^{T} \\sum_{-m \\leq j \\leq m,\\ j \\neq 0} \\text{log}\\, P(w^{(t+j)} \\mid w^{(t)}).$$\n",
"\n",
"当使用随机梯度下降来最小化损失时,在每次迭代中可以随机抽样一个较短的子序列来计算该子序列的(随机)梯度,以更新模型参数。为了计算该(随机)梯度,我们需要获得对数条件概率关于中心词向量和上下文词向量的梯度。通常,根据 :eqref:`eq_skip-gram-softmax`,涉及中心词$w_c$和上下文词$w_o$的对数条件概率为:\n",
"\n",
"$$\\log P(w_o \\mid w_c) =\\mathbf{u}_o^\\top \\mathbf{v}_c - \\log\\left(\\sum_{i \\in \\mathcal{V}} \\text{exp}(\\mathbf{u}_i^\\top \\mathbf{v}_c)\\right).$$\n",
":eqlabel:`eq_skip-gram-log`\n",
"\n",
"通过微分,我们可以获得其相对于中心词向量$\\mathbf{v}_c$的梯度为\n",
"\n",
"$$\\begin{aligned}\\frac{\\partial \\text{log}\\, P(w_o \\mid w_c)}{\\partial \\mathbf{v}_c}&= \\mathbf{u}_o - \\frac{\\sum_{j \\in \\mathcal{V}} \\exp(\\mathbf{u}_j^\\top \\mathbf{v}_c)\\mathbf{u}_j}{\\sum_{i \\in \\mathcal{V}} \\exp(\\mathbf{u}_i^\\top \\mathbf{v}_c)}\\\\&= \\mathbf{u}_o - \\sum_{j \\in \\mathcal{V}} \\left(\\frac{\\text{exp}(\\mathbf{u}_j^\\top \\mathbf{v}_c)}{ \\sum_{i \\in \\mathcal{V}} \\text{exp}(\\mathbf{u}_i^\\top \\mathbf{v}_c)}\\right) \\mathbf{u}_j\\\\&= \\mathbf{u}_o - \\sum_{j \\in \\mathcal{V}} P(w_j \\mid w_c) \\mathbf{u}_j.\\end{aligned}$$\n",
":eqlabel:`eq_skip-gram-grad`\n",
"\n",
"注意, :eqref:`eq_skip-gram-grad`中的计算需要词典中以$w_c$为中心词的所有词的条件概率。其他词向量的梯度可以以相同的方式获得。\n",
"\n",
"对词典中索引为$i$的词进行训练后,得到$\\mathbf{v}_i$(作为中心词)和$\\mathbf{u}_i$(作为上下文词)两个词向量。在自然语言处理应用中,跳元模型的中心词向量通常用作词表示。\n",
"\n",
"## 连续词袋(CBOW)模型\n",
"\n",
"*连续词袋*(CBOW)模型类似于跳元模型。与跳元模型的主要区别在于,连续词袋模型假设中心词是基于其在文本序列中的周围上下文词生成的。例如,在文本序列“the”“man”“loves”“his”“son”中,在“loves”为中心词且上下文窗口为2的情况下,连续词袋模型考虑基于上下文词“the”“man”“his”“son”(如 :numref:`fig_cbow`所示)生成中心词“loves”的条件概率,即:\n",
"\n",
"$$P(\\textrm{\"loves\"}\\mid\\textrm{\"the\"},\\textrm{\"man\"},\\textrm{\"his\"},\\textrm{\"son\"}).$$\n",
"\n",
"![连续词袋模型考虑了给定周围上下文词生成中心词条件概率](../img/cbow.svg)\n",
":label:`fig_cbow`\n",
"\n",
"\n",
"由于连续词袋模型中存在多个上下文词,因此在计算条件概率时对这些上下文词向量进行平均。具体地说,对于字典中索引$i$的任意词,分别用$\\mathbf{v}_i\\in\\mathbb{R}^d$和$\\mathbf{u}_i\\in\\mathbb{R}^d$表示用作*上下文*词和*中心*词的两个向量(符号与跳元模型中相反)。给定上下文词$w_{o_1}, \\ldots, w_{o_{2m}}$(在词表中索引是$o_1, \\ldots, o_{2m}$)生成任意中心词$w_c$(在词表中索引是$c$)的条件概率可以由以下公式建模:\n",
"\n",
"$$P(w_c \\mid w_{o_1}, \\ldots, w_{o_{2m}}) = \\frac{\\text{exp}\\left(\\frac{1}{2m}\\mathbf{u}_c^\\top (\\mathbf{v}_{o_1} + \\ldots + \\mathbf{v}_{o_{2m}}) \\right)}{ \\sum_{i \\in \\mathcal{V}} \\text{exp}\\left(\\frac{1}{2m}\\mathbf{u}_i^\\top (\\mathbf{v}_{o_1} + \\ldots + \\mathbf{v}_{o_{2m}}) \\right)}.$$\n",
":eqlabel:`fig_cbow-full`\n",
"\n",
"为了简洁起见,我们设$\\mathcal{W}_o= \\{w_{o_1}, \\ldots, w_{o_{2m}}\\}$和$\\bar{\\mathbf{v}}_o = \\left(\\mathbf{v}_{o_1} + \\ldots + \\mathbf{v}_{o_{2m}} \\right)/(2m)$。那么 :eqref:`fig_cbow-full`可以简化为:\n",
"\n",
"$$P(w_c \\mid \\mathcal{W}_o) = \\frac{\\exp\\left(\\mathbf{u}_c^\\top \\bar{\\mathbf{v}}_o\\right)}{\\sum_{i \\in \\mathcal{V}} \\exp\\left(\\mathbf{u}_i^\\top \\bar{\\mathbf{v}}_o\\right)}.$$\n",
"\n",
"给定长度为$T$的文本序列,其中时间步$t$处的词表示为$w^{(t)}$。对于上下文窗口$m$,连续词袋模型的似然函数是在给定其上下文词的情况下生成所有中心词的概率:\n",
"\n",
"$$ \\prod_{t=1}^{T} P(w^{(t)} \\mid w^{(t-m)}, \\ldots, w^{(t-1)}, w^{(t+1)}, \\ldots, w^{(t+m)}).$$\n",
"\n",
"### 训练\n",
"\n",
"训练连续词袋模型与训练跳元模型几乎是一样的。连续词袋模型的最大似然估计等价于最小化以下损失函数:\n",
"\n",
"$$ -\\sum_{t=1}^T \\text{log}\\, P(w^{(t)} \\mid w^{(t-m)}, \\ldots, w^{(t-1)}, w^{(t+1)}, \\ldots, w^{(t+m)}).$$\n",
"\n",
"请注意,\n",
"\n",
"$$\\log\\,P(w_c \\mid \\mathcal{W}_o) = \\mathbf{u}_c^\\top \\bar{\\mathbf{v}}_o - \\log\\,\\left(\\sum_{i \\in \\mathcal{V}} \\exp\\left(\\mathbf{u}_i^\\top \\bar{\\mathbf{v}}_o\\right)\\right).$$\n",
"\n",
"通过微分,我们可以获得其关于任意上下文词向量$\\mathbf{v}_{o_i}$$i = 1, \\ldots, 2m$)的梯度,如下:\n",
"\n",
"$$\\frac{\\partial \\log\\, P(w_c \\mid \\mathcal{W}_o)}{\\partial \\mathbf{v}_{o_i}} = \\frac{1}{2m} \\left(\\mathbf{u}_c - \\sum_{j \\in \\mathcal{V}} \\frac{\\exp(\\mathbf{u}_j^\\top \\bar{\\mathbf{v}}_o)\\mathbf{u}_j}{ \\sum_{i \\in \\mathcal{V}} \\text{exp}(\\mathbf{u}_i^\\top \\bar{\\mathbf{v}}_o)} \\right) = \\frac{1}{2m}\\left(\\mathbf{u}_c - \\sum_{j \\in \\mathcal{V}} P(w_j \\mid \\mathcal{W}_o) \\mathbf{u}_j \\right).$$\n",
":eqlabel:`eq_cbow-gradient`\n",
"\n",
"其他词向量的梯度可以以相同的方式获得。与跳元模型不同,连续词袋模型通常使用上下文词向量作为词表示。\n",
"\n",
"## 小结\n",
"\n",
"* 词向量是用于表示单词意义的向量,也可以看作词的特征向量。将词映射到实向量的技术称为词嵌入。\n",
"* word2vec工具包含跳元模型和连续词袋模型。\n",
"* 跳元模型假设一个词可以用于在文本序列中生成其周围的词;而连续词袋模型假设基于上下文词来生成中心词。\n",
"\n",
"## 练习\n",
"\n",
"1. 计算每个梯度的计算复杂度是多少?如果词表很大,会有什么问题呢?\n",
"1. 英语中的一些固定短语由多个单词组成,例如“new york”。如何训练它们的词向量?提示:查看word2vec论文的第四节 :cite:`Mikolov.Sutskever.Chen.ea.2013`。\n",
"1. 让我们以跳元模型为例来思考word2vec设计。跳元模型中两个词向量的点积与余弦相似度之间有什么关系?对于语义相似的一对词,为什么它们的词向量(由跳元模型训练)的余弦相似度可能很高?\n",
"\n",
"[Discussions](https://discuss.d2l.ai/t/5744)\n"
]
}
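上述公式可以用一个最小的数值草图来验证(玩具数据为演示而随机生成,并非word2vec的官方实现):先计算两个独热向量的余弦相似度,再按 :eqref:`eq_skip-gram-softmax`用点积的softmax得到条件概率分布。

```python
import numpy as np

np.random.seed(0)

# (1) 任意两个不同词的独热向量之间的余弦相似度为0
x, y = np.eye(5)[1], np.eye(5)[3]          # 词表大小为5,词索引1和3
cos = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
print(cos)  # 0.0

# (2) 跳元模型:P(w_o | w_c) = softmax(U @ v_c)
V, d = 10, 4                               # 词表大小与向量维度(演示用)
U = np.random.randn(V, d)                  # 上下文词向量u_i按行堆叠
v_c = np.random.randn(d)                   # 中心词向量v_c
p = np.exp(U @ v_c) / np.exp(U @ v_c).sum()
print(p.sum())                             # 条件概率之和约为1

# (3) 连续词袋模型:先对2m个上下文词向量取平均,再做softmax
Vo = np.random.randn(4, d)                 # 2m=4个上下文词向量
v_bar = Vo.mean(axis=0)
q = np.exp(U @ v_bar) / np.exp(U @ v_bar).sum()
print(q.sum())
```

该草图也说明了计算瓶颈:每个softmax分母都要对整个词表$\mathcal{V}$求和,这正是 :numref:`sec_approx_train`中近似训练方法所要降低的代价。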
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}