{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "e2119702",
   "metadata": {
    "origin_pos": 0
   },
   "source": [
    "# Attention Mechanisms\n",
    ":label:`chap_attention`\n",
    "\n",
    "The visual system of primates receives a massive amount of sensory input,\n",
    "far more than the brain can fully process.\n",
    "However, not all stimuli are created equal.\n",
    "The focusing and concentration of consciousness enable primates to direct attention toward objects of interest, such as prey and predators, in complex visual environments.\n",
    "The ability to attend to only a small fraction of the available information has been evolutionarily significant, allowing humans to survive and thrive.\n",
    "\n",
    "Since the 19th century, scientists have studied attention in the field of cognitive neuroscience.\n",
    "Many sections of this chapter will draw on some of that research.\n",
    "\n",
    "We first review a classic attention framework that explains how attention is deployed in a visual scene.\n",
    "Inspired by the *attention cues* in this framework,\n",
    "we will design models that exploit such attention cues.\n",
    "Nadaraya-Watson kernel regression from 1964 is a simple\n",
    "demonstration of machine learning with *attention mechanisms*.\n",
    "\n",
    "We then introduce attention functions, which are widely used in the design of attention models in deep learning.\n",
    "Specifically, we will show how to use these functions to design *Bahdanau attention*,\n",
    "a groundbreaking attention model in deep learning that aligns bidirectionally and is differentiable.\n",
    "\n",
    "Finally, we will describe the *Transformer* architecture,\n",
    "which is based solely on attention mechanisms and uses *multi-head attention*\n",
    "and *self-attention*.\n",
    "Since its debut in 2017, the Transformer has been pervasive in modern deep learning applications,\n",
    "for example in language, vision, speech, and reinforcement learning.\n",
    "\n",
    ":begin_tab:toc\n",
    " - [attention-cues](attention-cues.ipynb)\n",
    " - [nadaraya-waston](nadaraya-waston.ipynb)\n",
    " - [attention-scoring-functions](attention-scoring-functions.ipynb)\n",
    " - [bahdanau-attention](bahdanau-attention.ipynb)\n",
    " - [multihead-attention](multihead-attention.ipynb)\n",
    " - [self-attention-and-positional-encoding](self-attention-and-positional-encoding.ipynb)\n",
    " - [transformer](transformer.ipynb)\n",
    ":end_tab:\n"
]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  },
  "required_libs": []
 },
 "nbformat": 4,
 "nbformat_minor": 5
}