{
"cells": [
{
"cell_type": "markdown",
"id": "077b0f04",
"metadata": {
"origin_pos": 0
},
"source": [
"# 机器翻译与数据集\n",
":label:`sec_machine_translation`\n",
"\n",
"语言模型是自然语言处理的关键,\n",
"而*机器翻译*是语言模型最成功的基准测试。\n",
"因为机器翻译正是将输入序列转换成输出序列的\n",
"*序列转换模型*(sequence transduction)的核心问题。\n",
"序列转换模型在各类现代人工智能应用中发挥着至关重要的作用,\n",
"因此我们将其做为本章剩余部分和 :numref:`chap_attention`的重点。\n",
"为此,本节将介绍机器翻译问题及其后文需要使用的数据集。\n",
"\n",
"*机器翻译*(machine translation)指的是\n",
"将序列从一种语言自动翻译成另一种语言。\n",
"事实上,这个研究领域可以追溯到数字计算机发明后不久的20世纪40年代,\n",
"特别是在第二次世界大战中使用计算机破解语言编码。\n",
"几十年来,在使用神经网络进行端到端学习的兴起之前,\n",
"统计学方法在这一领域一直占据主导地位\n",
" :cite:`Brown.Cocke.Della-Pietra.ea.1988,Brown.Cocke.Della-Pietra.ea.1990`。\n",
"因为*统计机器翻译*(statistical machine translation)涉及了\n",
"翻译模型和语言模型等组成部分的统计分析,\n",
"因此基于神经网络的方法通常被称为\n",
"*神经机器翻译*(neural machine translation),\n",
"用于将两种翻译模型区分开来。\n",
"\n",
"本书的关注点是神经网络机器翻译方法,强调的是端到端的学习。\n",
"与 :numref:`sec_language_model`中的语料库\n",
"是单一语言的语言模型问题存在不同,\n",
"机器翻译的数据集是由源语言和目标语言的文本序列对组成的。\n",
"因此,我们需要一种完全不同的方法来预处理机器翻译数据集,\n",
"而不是复用语言模型的预处理程序。\n",
"下面,我们看一下如何将预处理后的数据加载到小批量中用于训练。\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "38f128f5",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:28.568184Z",
"iopub.status.busy": "2023-08-18T07:07:28.567577Z",
"iopub.status.idle": "2023-08-18T07:07:30.535405Z",
"shell.execute_reply": "2023-08-18T07:07:30.534582Z"
},
"origin_pos": 2,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"import os\n",
"import torch\n",
"from d2l import torch as d2l"
]
},
{
"cell_type": "markdown",
"id": "4f0458ec",
"metadata": {
"origin_pos": 5
},
"source": [
"## [**下载和预处理数据集**]\n",
"\n",
"首先,下载一个由[Tatoeba项目的双语句子对](http://www.manythings.org/anki/)\n",
"组成的“英-法”数据集,数据集中的每一行都是制表符分隔的文本序列对,\n",
"序列对由英文文本序列和翻译后的法语文本序列组成。\n",
"请注意,每个文本序列可以是一个句子,\n",
"也可以是包含多个句子的一个段落。\n",
"在这个将英语翻译成法语的机器翻译问题中,\n",
"英语是*源语言*(source language),\n",
"法语是*目标语言*(target language)。\n"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "b3461d76",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:30.539676Z",
"iopub.status.busy": "2023-08-18T07:07:30.539042Z",
"iopub.status.idle": "2023-08-18T07:07:30.809623Z",
"shell.execute_reply": "2023-08-18T07:07:30.808727Z"
},
"origin_pos": 6,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloading ../data/fra-eng.zip from http://d2l-data.s3-accelerate.amazonaws.com/fra-eng.zip...\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Go.\tVa !\n",
"Hi.\tSalut !\n",
"Run!\tCours !\n",
"Run!\tCourez !\n",
"Who?\tQui ?\n",
"Wow!\tÇa alors !\n",
"\n"
]
}
],
"source": [
"#@save\n",
"d2l.DATA_HUB['fra-eng'] = (d2l.DATA_URL + 'fra-eng.zip',\n",
" '94646ad1522d915e7b0f9296181140edcf86a4f5')\n",
"\n",
"#@save\n",
"def read_data_nmt():\n",
" \"\"\"载入“英语-法语”数据集\"\"\"\n",
" data_dir = d2l.download_extract('fra-eng')\n",
" with open(os.path.join(data_dir, 'fra.txt'), 'r',\n",
" encoding='utf-8') as f:\n",
" return f.read()\n",
"\n",
"raw_text = read_data_nmt()\n",
"print(raw_text[:75])"
]
},
{
"cell_type": "markdown",
"id": "de8c081f",
"metadata": {
"origin_pos": 7
},
"source": [
"下载数据集后,原始文本数据需要经过[**几个预处理步骤**]。\n",
"例如,我们用空格代替*不间断空格*(non-breaking space),\n",
"使用小写字母替换大写字母,并在单词和标点符号之间插入空格。\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "114c461d",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:30.813835Z",
"iopub.status.busy": "2023-08-18T07:07:30.813273Z",
"iopub.status.idle": "2023-08-18T07:07:36.581927Z",
"shell.execute_reply": "2023-08-18T07:07:36.580959Z"
},
"origin_pos": 8,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"go .\tva !\n",
"hi .\tsalut !\n",
"run !\tcours !\n",
"run !\tcourez !\n",
"who ?\tqui ?\n",
"wow !\tça alors !\n"
]
}
],
"source": [
"#@save\n",
"def preprocess_nmt(text):\n",
" \"\"\"预处理“英语-法语”数据集\"\"\"\n",
" def no_space(char, prev_char):\n",
" return char in set(',.!?') and prev_char != ' '\n",
"\n",
" # 使用空格替换不间断空格\n",
" # 使用小写字母替换大写字母\n",
" text = text.replace('\\u202f', ' ').replace('\\xa0', ' ').lower()\n",
" # 在单词和标点符号之间插入空格\n",
" out = [' ' + char if i > 0 and no_space(char, text[i - 1]) else char\n",
" for i, char in enumerate(text)]\n",
" return ''.join(out)\n",
"\n",
"text = preprocess_nmt(raw_text)\n",
"print(text[:80])"
]
},
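{
"cell_type": "markdown",
"id": "d3a1f2b9",
"metadata": {},
"source": [
"As a quick sanity check of these steps, we can run `preprocess_nmt` on a tiny made-up string\n",
"(the `\\u202f` below is a narrow no-break space, one of the characters the function normalizes).\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d3a1f2c0",
"metadata": {
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"# A made-up sample with a narrow no-break space, a non-breaking space and mixed case\n",
"sample = 'Hello, world!\\u202fWho?\\xa0Wow!'\n",
"print(preprocess_nmt(sample))"
]
},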
{
"cell_type": "markdown",
"id": "e4048187",
"metadata": {
"origin_pos": 9
},
"source": [
"## [**词元化**]\n",
"\n",
"与 :numref:`sec_language_model`中的字符级词元化不同,\n",
"在机器翻译中,我们更喜欢单词级词元化\n",
"(最先进的模型可能使用更高级的词元化技术)。\n",
"下面的`tokenize_nmt`函数对前`num_examples`个文本序列对进行词元,\n",
"其中每个词元要么是一个词,要么是一个标点符号。\n",
"此函数返回两个词元列表:`source`和`target`:\n",
"`source[i]`是源语言(这里是英语)第$i$个文本序列的词元列表,\n",
"`target[i]`是目标语言(这里是法语)第$i$个文本序列的词元列表。\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "cc08d1a5",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:36.585962Z",
"iopub.status.busy": "2023-08-18T07:07:36.585396Z",
"iopub.status.idle": "2023-08-18T07:07:37.431130Z",
"shell.execute_reply": "2023-08-18T07:07:37.430360Z"
},
"origin_pos": 10,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"([['go', '.'],\n",
" ['hi', '.'],\n",
" ['run', '!'],\n",
" ['run', '!'],\n",
" ['who', '?'],\n",
" ['wow', '!']],\n",
" [['va', '!'],\n",
" ['salut', '!'],\n",
" ['cours', '!'],\n",
" ['courez', '!'],\n",
" ['qui', '?'],\n",
" ['ça', 'alors', '!']])"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#@save\n",
"def tokenize_nmt(text, num_examples=None):\n",
" \"\"\"词元化“英语-法语”数据数据集\"\"\"\n",
" source, target = [], []\n",
" for i, line in enumerate(text.split('\\n')):\n",
" if num_examples and i > num_examples:\n",
" break\n",
" parts = line.split('\\t')\n",
" if len(parts) == 2:\n",
" source.append(parts[0].split(' '))\n",
" target.append(parts[1].split(' '))\n",
" return source, target\n",
"\n",
"source, target = tokenize_nmt(text)\n",
"source[:6], target[:6]"
]
},
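{
"cell_type": "markdown",
"id": "e7c4a150",
"metadata": {},
"source": [
"The two lists are parallel: `source[i]` and `target[i]` form one translation pair.\n",
"As a quick check, we can print how many pairs were kept and look at one longer pair\n",
"(index 3000 below is an arbitrary pick from the middle of the dataset).\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e7c4a151",
"metadata": {
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"# The lists must have the same length: one entry per translation pair\n",
"print(len(source), len(target))\n",
"# An arbitrary longer pair from the middle of the dataset\n",
"print(source[3000], target[3000])"
]
},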
{
"cell_type": "markdown",
"id": "1d8ecec6",
"metadata": {
"origin_pos": 11
},
"source": [
"让我们[**绘制每个文本序列所包含的词元数量的直方图**]。\n",
"在这个简单的“英-法”数据集中,大多数文本序列的词元数量少于$20$个。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "118a08f5",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:37.436349Z",
"iopub.status.busy": "2023-08-18T07:07:37.435794Z",
"iopub.status.idle": "2023-08-18T07:07:37.726273Z",
"shell.execute_reply": "2023-08-18T07:07:37.725467Z"
},
"origin_pos": 12,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"image/svg+xml": [
"\n",
"\n",
"\n"
],
"text/plain": [
""
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"#@save\n",
"def show_list_len_pair_hist(legend, xlabel, ylabel, xlist, ylist):\n",
" \"\"\"绘制列表长度对的直方图\"\"\"\n",
" d2l.set_figsize()\n",
" _, _, patches = d2l.plt.hist(\n",
" [[len(l) for l in xlist], [len(l) for l in ylist]])\n",
" d2l.plt.xlabel(xlabel)\n",
" d2l.plt.ylabel(ylabel)\n",
" for patch in patches[1].patches:\n",
" patch.set_hatch('/')\n",
" d2l.plt.legend(legend)\n",
"\n",
"show_list_len_pair_hist(['source', 'target'], '# tokens per sequence',\n",
" 'count', source, target);"
]
},
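{
"cell_type": "markdown",
"id": "f2b8d634",
"metadata": {},
"source": [
"The histogram suggests that short sequences dominate.\n",
"We can back this claim with a number: the fraction of sequences that have fewer than\n",
"$20$ tokens (a plain-Python sketch over the token lists built above).\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f2b8d635",
"metadata": {
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"# Fraction of sequences with fewer than 20 tokens\n",
"short_src = sum(len(l) < 20 for l in source) / len(source)\n",
"short_tgt = sum(len(l) < 20 for l in target) / len(target)\n",
"print(f'source: {short_src:.3f}, target: {short_tgt:.3f}')"
]
},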
{
"cell_type": "markdown",
"id": "26e67b14",
"metadata": {
"origin_pos": 13
},
"source": [
"## [**词表**]\n",
"\n",
"由于机器翻译数据集由语言对组成,\n",
"因此我们可以分别为源语言和目标语言构建两个词表。\n",
"使用单词级词元化时,词表大小将明显大于使用字符级词元化时的词表大小。\n",
"为了缓解这一问题,这里我们将出现次数少于2次的低频率词元\n",
"视为相同的未知(“<unk>”)词元。\n",
"除此之外,我们还指定了额外的特定词元,\n",
"例如在小批量时用于将序列填充到相同长度的填充词元(“<pad>”),\n",
"以及序列的开始词元(“<bos>”)和结束词元(“<eos>”)。\n",
"这些特殊词元在自然语言处理任务中比较常用。\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "1179a522",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:37.732422Z",
"iopub.status.busy": "2023-08-18T07:07:37.731864Z",
"iopub.status.idle": "2023-08-18T07:07:37.957959Z",
"shell.execute_reply": "2023-08-18T07:07:37.957157Z"
},
"origin_pos": 14,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"10012"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"src_vocab = d2l.Vocab(source, min_freq=2,\n",
" reserved_tokens=['', '', ''])\n",
"len(src_vocab)"
]
},
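{
"cell_type": "markdown",
"id": "a9e51c77",
"metadata": {},
"source": [
"As a sanity check, we can look up the indices of the special tokens and round-trip\n",
"the first source sentence through the vocabulary. This sketch assumes the `d2l.Vocab`\n",
"behavior used earlier in the book: the unknown token \"<unk>\" gets index $0$,\n",
"the reserved tokens follow, and `to_tokens` inverts the token-to-index mapping.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a9e51c78",
"metadata": {
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"# '<unk>' gets index 0; the reserved tokens follow in the given order\n",
"print(src_vocab['<unk>'], src_vocab['<pad>'], src_vocab['<bos>'], src_vocab['<eos>'])\n",
"# Round-trip the first source sentence: tokens -> indices -> tokens\n",
"indices = src_vocab[source[0]]\n",
"print(indices, src_vocab.to_tokens(indices))"
]
},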
{
"cell_type": "markdown",
"id": "216c91b5",
"metadata": {
"origin_pos": 15
},
"source": [
"## 加载数据集\n",
":label:`subsec_mt_data_loading`\n",
"\n",
"回想一下,语言模型中的[**序列样本都有一个固定的长度**],\n",
"无论这个样本是一个句子的一部分还是跨越了多个句子的一个片断。\n",
"这个固定长度是由 :numref:`sec_language_model`中的\n",
"`num_steps`(时间步数或词元数量)参数指定的。\n",
"在机器翻译中,每个样本都是由源和目标组成的文本序列对,\n",
"其中的每个文本序列可能具有不同的长度。\n",
"\n",
"为了提高计算效率,我们仍然可以通过*截断*(truncation)和\n",
"*填充*(padding)方式实现一次只处理一个小批量的文本序列。\n",
"假设同一个小批量中的每个序列都应该具有相同的长度`num_steps`,\n",
"那么如果文本序列的词元数目少于`num_steps`时,\n",
"我们将继续在其末尾添加特定的“<pad>”词元,\n",
"直到其长度达到`num_steps`;\n",
"反之,我们将截断文本序列时,只取其前`num_steps` 个词元,\n",
"并且丢弃剩余的词元。这样,每个文本序列将具有相同的长度,\n",
"以便以相同形状的小批量进行加载。\n",
"\n",
"如前所述,下面的`truncate_pad`函数将(**截断或填充文本序列**)。\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "42aa524a",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:37.962377Z",
"iopub.status.busy": "2023-08-18T07:07:37.961250Z",
"iopub.status.idle": "2023-08-18T07:07:37.968643Z",
"shell.execute_reply": "2023-08-18T07:07:37.967892Z"
},
"origin_pos": 16,
"tab": [
"pytorch"
]
},
"outputs": [
{
"data": {
"text/plain": [
"[47, 4, 1, 1, 1, 1, 1, 1, 1, 1]"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"#@save\n",
"def truncate_pad(line, num_steps, padding_token):\n",
" \"\"\"截断或填充文本序列\"\"\"\n",
" if len(line) > num_steps:\n",
" return line[:num_steps] # 截断\n",
" return line + [padding_token] * (num_steps - len(line)) # 填充\n",
"\n",
"truncate_pad(src_vocab[source[0]], 10, src_vocab[''])"
]
},
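{
"cell_type": "markdown",
"id": "b4d7e982",
"metadata": {},
"source": [
"The call above exercises the padding branch. The truncation branch is just as easy\n",
"to check with a sequence longer than `num_steps`; plain integers stand in for\n",
"token indices here.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b4d7e983",
"metadata": {
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"# A fake sequence of 12 token indices truncated to num_steps=10\n",
"truncate_pad(list(range(12)), 10, src_vocab['<pad>'])"
]
},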
{
"cell_type": "markdown",
"id": "2974b660",
"metadata": {
"origin_pos": 17
},
"source": [
"现在我们定义一个函数,可以将文本序列\n",
"[**转换成小批量数据集用于训练**]。\n",
"我们将特定的“<eos>”词元添加到所有序列的末尾,\n",
"用于表示序列的结束。\n",
"当模型通过一个词元接一个词元地生成序列进行预测时,\n",
"生成的“<eos>”词元说明完成了序列输出工作。\n",
"此外,我们还记录了每个文本序列的长度,\n",
"统计长度时排除了填充词元,\n",
"在稍后将要介绍的一些模型会需要这个长度信息。\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "db17050b",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:37.973483Z",
"iopub.status.busy": "2023-08-18T07:07:37.972873Z",
"iopub.status.idle": "2023-08-18T07:07:37.978080Z",
"shell.execute_reply": "2023-08-18T07:07:37.977330Z"
},
"origin_pos": 18,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"def build_array_nmt(lines, vocab, num_steps):\n",
" \"\"\"将机器翻译的文本序列转换成小批量\"\"\"\n",
" lines = [vocab[l] for l in lines]\n",
" lines = [l + [vocab['']] for l in lines]\n",
" array = torch.tensor([truncate_pad(\n",
" l, num_steps, vocab['']) for l in lines])\n",
" valid_len = (array != vocab['']).type(torch.int32).sum(1)\n",
" return array, valid_len"
]
},
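{
"cell_type": "markdown",
"id": "c6f30a18",
"metadata": {},
"source": [
"Before wiring everything into a data iterator, we can try `build_array_nmt` on a handful\n",
"of source sequences. Each valid length counts the real tokens plus the appended\n",
"\"<eos>\" token, but not the padding.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c6f30a19",
"metadata": {
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"# Build a small minibatch-like array from the first three source sequences\n",
"arr, valid_len = build_array_nmt(source[:3], src_vocab, num_steps=8)\n",
"print(arr)\n",
"print(valid_len)"
]
},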
{
"cell_type": "markdown",
"id": "85e2af67",
"metadata": {
"origin_pos": 19
},
"source": [
"## [**训练模型**]\n",
"\n",
"最后,我们定义`load_data_nmt`函数来返回数据迭代器,\n",
"以及源语言和目标语言的两种词表。\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "8addcc51",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:37.982873Z",
"iopub.status.busy": "2023-08-18T07:07:37.982349Z",
"iopub.status.idle": "2023-08-18T07:07:37.988101Z",
"shell.execute_reply": "2023-08-18T07:07:37.987357Z"
},
"origin_pos": 20,
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"#@save\n",
"def load_data_nmt(batch_size, num_steps, num_examples=600):\n",
" \"\"\"返回翻译数据集的迭代器和词表\"\"\"\n",
" text = preprocess_nmt(read_data_nmt())\n",
" source, target = tokenize_nmt(text, num_examples)\n",
" src_vocab = d2l.Vocab(source, min_freq=2,\n",
" reserved_tokens=['', '', ''])\n",
" tgt_vocab = d2l.Vocab(target, min_freq=2,\n",
" reserved_tokens=['', '', ''])\n",
" src_array, src_valid_len = build_array_nmt(source, src_vocab, num_steps)\n",
" tgt_array, tgt_valid_len = build_array_nmt(target, tgt_vocab, num_steps)\n",
" data_arrays = (src_array, src_valid_len, tgt_array, tgt_valid_len)\n",
" data_iter = d2l.load_array(data_arrays, batch_size)\n",
" return data_iter, src_vocab, tgt_vocab"
]
},
{
"cell_type": "markdown",
"id": "6afba4ea",
"metadata": {
"origin_pos": 21
},
"source": [
"下面我们[**读出“英语-法语”数据集中的第一个小批量数据**]。\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "90df834d",
"metadata": {
"execution": {
"iopub.execute_input": "2023-08-18T07:07:37.992732Z",
"iopub.status.busy": "2023-08-18T07:07:37.992204Z",
"iopub.status.idle": "2023-08-18T07:07:43.780428Z",
"shell.execute_reply": "2023-08-18T07:07:43.779613Z"
},
"origin_pos": 22,
"tab": [
"pytorch"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"X: tensor([[ 7, 43, 4, 3, 1, 1, 1, 1],\n",
" [44, 23, 4, 3, 1, 1, 1, 1]], dtype=torch.int32)\n",
"X的有效长度: tensor([4, 4])\n",
"Y: tensor([[ 6, 7, 40, 4, 3, 1, 1, 1],\n",
" [ 0, 5, 3, 1, 1, 1, 1, 1]], dtype=torch.int32)\n",
"Y的有效长度: tensor([5, 3])\n"
]
}
],
"source": [
"train_iter, src_vocab, tgt_vocab = load_data_nmt(batch_size=2, num_steps=8)\n",
"for X, X_valid_len, Y, Y_valid_len in train_iter:\n",
" print('X:', X.type(torch.int32))\n",
" print('X的有效长度:', X_valid_len)\n",
" print('Y:', Y.type(torch.int32))\n",
" print('Y的有效长度:', Y_valid_len)\n",
" break"
]
},
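{
"cell_type": "markdown",
"id": "d8b2c4e5",
"metadata": {},
"source": [
"To confirm that the minibatch is what we expect, we can map the index tensors back to\n",
"tokens; note the trailing \"<eos>\" and \"<pad>\" tokens. The `.tolist()` call is needed\n",
"because `to_tokens` expects plain Python indices rather than tensors.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d8b2c4e6",
"metadata": {
"tab": [
"pytorch"
]
},
"outputs": [],
"source": [
"# Map the first sequence of each tensor back to readable tokens\n",
"print(' '.join(src_vocab.to_tokens(X[0].tolist())))\n",
"print(' '.join(tgt_vocab.to_tokens(Y[0].tolist())))"
]
},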
{
"cell_type": "markdown",
"id": "df773107",
"metadata": {
"origin_pos": 23
},
"source": [
"## 小结\n",
"\n",
"* 机器翻译指的是将文本序列从一种语言自动翻译成另一种语言。\n",
"* 使用单词级词元化时的词表大小,将明显大于使用字符级词元化时的词表大小。为了缓解这一问题,我们可以将低频词元视为相同的未知词元。\n",
"* 通过截断和填充文本序列,可以保证所有的文本序列都具有相同的长度,以便以小批量的方式加载。\n",
"\n",
"## 练习\n",
"\n",
"1. 在`load_data_nmt`函数中尝试不同的`num_examples`参数值。这对源语言和目标语言的词表大小有何影响?\n",
"1. 某些语言(例如中文和日语)的文本没有单词边界指示符(例如空格)。对于这种情况,单词级词元化仍然是个好主意吗?为什么?\n"
]
},
{
"cell_type": "markdown",
"id": "439397fa",
"metadata": {
"origin_pos": 25,
"tab": [
"pytorch"
]
},
"source": [
"[Discussions](https://discuss.d2l.ai/t/2776)\n"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"required_libs": []
},
"nbformat": 4,
"nbformat_minor": 5
}