{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# BLEU 详解\n", "\n", "\n", "\n", "## 大纲\n", "\n", "- 简介\n", "- 论文\n", "- 代码\n", "- 思考" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 简介 \n", "\n", "### What\n", "\n", "BLEU 全称 bilingual evaluation understudy,是自动评价翻译效果的一种方法。 \n", "特点是:快速、廉价和不依赖语言,并且与人类评估的结果高度一致。 \n", "\n", "$$BLEU = BP * exp(\\sum_{n=1}^{N} w_n logP_n)$$ \n", "如果候选译文的长度 c 大于参考译文最接近候选译文的长度 r,$BP = 1$;否则 $BP = e^{1-r/c}$\n", "\n", "### How\n", "\n", "BLEU 通过计算候选译文和参考译文的 N-gram 实现对翻译结果的评价。步骤和细节如下: \n", "- 计算 N-gram 的 P 值 \n", " - 对每个 N,统计候选译文中每个 N-gram 出现的次数 \n", " - 统计给定几个参考译文中出现的最大次数,取与上步相比次数的较小值\n", " - 第二步求和除以第一步求和,得到该 N-gram 的 P 值\n", "- 获取 BP 值\n", "- 得到 BLEU 值\n", "\n", "\n", "### Why\n", "\n", "The closer a machine translation is to a professional human translation, the better it is.\n", "- A translation using the same words (1-grams) as in the references tends to satisfy adequacy. The longer n-gram matches account for fluency.\n", "- brevity penalty" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 论文\n", "\n", "[bleu.dvi](http://www.aclweb.org/anthology/P02-1040.pdf) \n", "主要梳理作者的思路。 \n", "\n", "\n", "### Introduction\n", "\n", "\n", "#### Rationale\n", "\n", "主要介绍了背景和原因,用人力去评估翻译效果耗时耗力。 \n", "\n", "#### Viewpoint\n", "\n", "介绍了主要观点和文章结构,核心观点是:机器翻译和专家翻译的越接近越好。 \n", "所以需要评价机器翻译和专家翻译结果的相似性。 \n", "而相似性和单词的选择与顺序有关,本文的主要观点是使用一定长度的短语评估。 \n", "\n", "\n", "### The Baseline BLEU Metric\n", "\n", "介绍了基本方法:计算 n-gram。\n", "\n", "#### Modified n-gram precision\n", "\n", "- 如何计算 n-gram(unigram 为例) \n", "- n-gram 捕获了用词的合适性和流畅性\n", "\n", "##### Modified n-gram precision on blocks of text\n", "\n", "- Pn 计算公式\n", "- 计算不同的 n-gram\n", "\n", "##### Ranking systems using only modified n-gram precision\n", "\n", "- 从 1-gram 到 4-gram 机器和人翻译结果的对比\n", "- 每个 n-gram 上,机器与人表现具有一致性\n", " - 随着 n 增加,Precision 下降\n", " - 人的 Precision 均超过机器,而且随着 n 增加,人的 Precision 超机器更多\n", "\n", "##### Combining the modified n-gram precisions\n", "\n", "因为 4-gram 下机器的准确率已经很低了,所以最多取到 4。\n", "\n", "\n", "#### Sentence length\n", "\n", "- 考虑候选译文长度的问题,不能太长或太短。 \n", " - N-gram 惩罚『不好』的词(候选译文中没出现的)\n", " - 同时,惩罚那些出现次数超过候选译文的 N-gram\n", " - 但是,对那些短的很荒唐的句子,但却出现在候选译文中的情况无能为力\n", "\n", "##### The trouble with recall\n", "\n", "- 传统的召回率解决长度问题\n", " - 但如果出现了所有候选译文中的词(长度很长),整体效果不好\n", "\n", "##### Sentence brevity penalty\n", "\n", "- 长度(过短)惩罚(BP)\n", " - 太长的问题 n-gram 已经搞定\n", " - 太短的问题 brevity penalty\n", "\n", ">With this brevity penalty in place, a high-scoring candidate translation must now match the reference translations in length, in word choice, and in word order.\n", "\n", "\n", "#### BLEU details\n", "\n", "BP 和 BLEU 的表达式细节\n", "\n", "### The BLEU Evaluation\n", "\n", "- 500 句(40 个故事),两个参考译文,两个人+三个 MT 共 五个 systems:人的 BLEU 得分比机器得分高\n", "- 问题:\n", " - 这个不同可靠吗?\n", " - BLEU 不一致的地方是什么?\n", " - 如果用另外 500 句还能得出类似结果吗?\n", "- 分成 20 组,每组 25 句,25 句 bleu 平均后得到 20 个数据\n", " - 每组结果应该和整体一致(实际如此)\n", " - 两两 t 检验应该不相同(原假设相同,期望拒绝原假设,实际如此)\n", "- 需要多少个参考翻译?\n", " - 随机选择四个参考翻译中的一个作为 40 个故事中的每一个的单个参考模拟单参考测试语料库,确保了一定程度的风格变化\n", " - 排序结果与多参考翻译结果一致\n", " \n", "\n", "### The Human Evaluation\n", "\n", "为了检验结果的可靠性,用真人进行评估(得分从 1-5),还是两组不同类型的人;数据取自 500 句,共取出 50 句,形成 250 对。 \n", "- 单语组: 10 个本土说英语的;只评估可读性和流畅性\n", "- 双语组: 10 个在本土生活多年的中国人;所有人都不是翻译专家\n", "- 最后还有 t 检验\n", "\n", "两两 t 检验,在 95% 的置信区间上,单语组和双语组均通过检验(每两组都有差异),结果排序与之前一致\n", "\n", "\n", "### BLEU vs The Human Evaluation \n", "\n", "- 与 BLEU 分数(两参考译文)线性回归(五个),单语言组相关系数 0.99,双语言组相关系数 0.96\n", "- 把最差的 system 作为参考点,综合考虑 BLEU、单语言组、多语言组评估在五个系统上的表现(各组内部分别 0-1 标准化后的结果)\n", " - BLEU 和单语言组高度相关\n", " - 双语组得分较高并且与其他两组有一些差别\n", " - MT 系统和人类相差明显\n", "\n", "\n", "### Conclusion\n", "\n", "BLEU’s strength is that it correlates highly with human judgments by averaging out individual sentence judgment errors over a test corpus rather than attempting to divine the exact human judgment for every sentence: quantity leads to quality. \n", "通过对测试数据中的单个句子判断错误进行平均而不是试图对每个句子进行人类的精确判断进行平均,与人类的评估高度相关:数量导致质量。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 代码 " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### NLTK\n", "\n", "[nltk.align.bleu_score — NLTK 3.0 documentation](http://www.nltk.org/_modules/nltk/align/bleu_score.html) \n", "比较直观" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": true }, "outputs": [], "source": [ "import collections\n", "import nltk\n", "import math\n", "from nltk.tokenize import word_tokenize\n", "from collections import Counter\n", "from nltk.util import ngrams" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": true }, "outputs": [], "source": [ "weights = [0.25, 0.25, 0.25, 0.25]\n", "candidate1 = ['It', 'is', 'a', 'guide', 'to', 'action', 'which','ensures', 'that', 'the', 'military', 'always','obeys', 'the', 'commands', 'of', 'the', 'party']\n", "candidate2 = ['It', 'is', 'to', 'insure', 'the', 'troops','forever', 'hearing', 'the', 'activity', 'guidebook','that', 'party', 'direct']\n", "reference1 = ['It', 'is', 'a', 'guide', 'to', 'action', 'that','ensures', 'that', 'the', 'military', 'will', 'forever','heed', 'Party', 'commands']\n", "reference2 = ['It', 'is', 'the', 'guiding', 'principle', 'which','guarantees', 'the', 'military', 'forces', 'always','being', 'under', 'the', 'command', 'of', 'the','Party']\n", "reference3 = ['It', 'is', 'the', 'practical', 'guide', 'for', 'the','army', 'always', 'to', 'heed', 'the', 'directions','of', 'the', 'party']" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": true }, "outputs": [], "source": [ "references = [reference1, reference2, reference3]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Pn" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def modified_precision(candidate, references, n):\n", " counts = Counter(ngrams(candidate, n))\n", "\n", " if not counts:\n", " return 0\n", "\n", " max_counts = {}\n", " # 遍历每个 reference\n", " for reference in references:\n", " reference_counts = Counter(ngrams(reference, n))\n", " # 获取 references 中 N-gram 的最大值\n", " for ngram in counts:\n", " max_counts[ngram] = max(max_counts.get(ngram, 0), reference_counts[ngram])\n", "\n", " clipped_counts = dict((ngram, min(count, max_counts[ngram])) for ngram, count in counts.items())\n", "\n", " return sum(clipped_counts.values()) / sum(counts.values())" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": true }, "outputs": [], "source": [ "counts = Counter(ngrams(candidate1, 1))" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "('a',)\n", "('to',)\n", "('that',)\n", "('commands',)\n", "('party',)\n", "('action',)\n", "('always',)\n", "('is',)\n", "('obeys',)\n", "('military',)\n", "('It',)\n", "('guide',)\n", "('the',)\n", "('of',)\n", "('which',)\n", "('ensures',)\n" ] } ], "source": [ "for ngram in counts:\n", " print(ngram)" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": true }, "outputs": [], "source": [ "max_counts = {}\n", "for reference in references:\n", " reference_counts = Counter(ngrams(reference, 1))\n", " for ngram in counts:\n", " max_counts[ngram] = max(max_counts.get(ngram, 0), reference_counts[ngram])" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": true }, "outputs": [], "source": [ "clipped_counts = dict((ngram, min(count, max_counts[ngram])) for ngram, count in counts.items())" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{('It',): 1,\n", " ('a',): 1,\n", " ('action',): 1,\n", " ('always',): 1,\n", " ('commands',): 1,\n", " ('ensures',): 1,\n", " ('guide',): 1,\n", " ('is',): 1,\n", " ('military',): 1,\n", " ('obeys',): 0,\n", " ('of',): 1,\n", " ('party',): 1,\n", " ('that',): 1,\n", " ('the',): 3,\n", " ('to',): 1,\n", " ('which',): 1}" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "clipped_counts" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.9444444444444444" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sum(clipped_counts.values()) / sum(counts.values())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### BP" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def brevity_penalty(candidate, references):\n", "\n", " c = len(candidate)\n", " ref_lens = (len(reference) for reference in references)\n", " # 这里有个知识点是Python中元组是可以比较的,如(0,1)>(1,0)返回False,这里利用元组比较实现了选取参考翻译中长度最接近候选翻译的句子,\n", " # 当最接近的参考翻译有多个时,选取最短的。例如候选翻译长度是10,两个参考翻译长度分别为9和11,则r=9.\n", " # 元组可以比较多个变量\n", " r = min(ref_lens, key=lambda ref_len: (abs(ref_len - c), ref_len))\n", " print ('r:',r)\n", "\n", " if c > r:\n", " return 1\n", " else:\n", " return math.exp(1 - r / c)" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "r: 16\n" ] }, { "data": { "text/plain": [ "0.8668778997501817" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "brevity_penalty(candidate2, references)" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "18" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "c = len(candidate1)\n", "c" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": true }, "outputs": [], "source": [ "ref_lens = (len(reference) for reference in references)" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "[16, 18, 16]" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "list(ref_lens)" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "(1, 10)" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "min((1,10),(1,16))" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "r: 18\n" ] }, { "data": { "text/plain": [ "0.9444444444444444" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "brevity_penalty(candidate1, references) * modified_precision(candidate1,[reference1, reference2, reference3], 1)" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0.9444444444444444" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "modified_precision(candidate1,[reference1, reference2, reference3], 1)" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[(1,), (2,), (3,), (4,), (5,)]" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# nltk.ngrams\n", "list(ngrams([1,2,3,4,5],1))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### BLEU" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def bleu(candidate, references, weights):\n", " p_ns = (\n", " modified_precision(candidate, references, i)\n", " for i, _ in enumerate(weights, start=1)\n", " )\n", "\n", " try:\n", " s = math.fsum(w * math.log(p_n) for w, p_n in zip(weights, p_ns))\n", " except ValueError:\n", " # some p_ns is 0\n", " return 0\n", "\n", " bp = brevity_penalty(candidate, references)\n", " return bp * math.exp(s)" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "r: 18\n" ] }, { "data": { "text/plain": [ "0.5045666840058485" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "bleu(candidate1, references, weights)" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "0" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "bleu(candidate2, references, weights)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### GNMT\n", "\n", "Google 发布的 NMT: [nmt/bleu.py at master · tensorflow/nmt](https://github.com/tensorflow/nmt/blob/master/nmt/scripts/bleu.py) \n", "抽象的很好。" ] }, { "cell_type": "code", "execution_count": 60, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def get_ngrams(segment, max_order):\n", " \n", " \"\"\"Extracts all n-grams upto a given maximum order from an input segment.\n", "\n", " Args:\n", " segment: text segment from which n-grams will be extracted.\n", " max_order: maximum length in tokens of the n-grams returned by this\n", " methods.\n", "\n", " Returns:\n", " The Counter containing all n-grams upto max_order in segment\n", " with a count of how many times each n-gram occurred.\n", " \"\"\"\n", " ngram_counts = collections.Counter()\n", " for order in range(1, max_order + 1):\n", " for i in range(0, len(segment) - order + 1):\n", " ngram = tuple(segment[i:i+order])\n", " ngram_counts[ngram] += 1\n", " return ngram_counts" ] }, { "cell_type": "code", "execution_count": 56, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Counter({('It',): 1,\n", " ('It', 'is'): 1,\n", " ('It', 'is', 'a'): 1,\n", " ('It', 'is', 'a', 'guide'): 1,\n", " ('a',): 1,\n", " ('a', 'guide'): 1,\n", " ('a', 'guide', 'to'): 1,\n", " ('a', 'guide', 'to', 'action'): 1,\n", " ('action',): 1,\n", " ('action', 'which'): 1,\n", " ('action', 'which', 'ensures'): 1,\n", " ('action', 'which', 'ensures', 'that'): 1,\n", " ('always',): 1,\n", " ('always', 'obeys'): 1,\n", " ('always', 'obeys', 'the'): 1,\n", " ('always', 'obeys', 'the', 'commands'): 1,\n", " ('commands',): 1,\n", " ('commands', 'of'): 1,\n", " ('commands', 'of', 'the'): 1,\n", " ('commands', 'of', 'the', 'party'): 1,\n", " ('ensures',): 1,\n", " ('ensures', 'that'): 1,\n", " ('ensures', 'that', 'the'): 1,\n", " ('ensures', 'that', 'the', 'military'): 1,\n", " ('guide',): 1,\n", " ('guide', 'to'): 1,\n", " ('guide', 'to', 'action'): 1,\n", " ('guide', 'to', 'action', 'which'): 1,\n", " ('is',): 1,\n", " ('is', 'a'): 1,\n", " ('is', 'a', 'guide'): 1,\n", " ('is', 'a', 'guide', 'to'): 1,\n", " ('military',): 1,\n", " ('military', 'always'): 1,\n", " ('military', 'always', 'obeys'): 1,\n", " ('military', 'always', 'obeys', 'the'): 1,\n", " ('obeys',): 1,\n", " ('obeys', 'the'): 1,\n", " ('obeys', 'the', 'commands'): 1,\n", " ('obeys', 'the', 'commands', 'of'): 1,\n", " ('of',): 1,\n", " ('of', 'the'): 1,\n", " ('of', 'the', 'party'): 1,\n", " ('party',): 1,\n", " ('that',): 1,\n", " ('that', 'the'): 1,\n", " ('that', 'the', 'military'): 1,\n", " ('that', 'the', 'military', 'always'): 1,\n", " ('the',): 3,\n", " ('the', 'commands'): 1,\n", " ('the', 'commands', 'of'): 1,\n", " ('the', 'commands', 'of', 'the'): 1,\n", " ('the', 'military'): 1,\n", " ('the', 'military', 'always'): 1,\n", " ('the', 'military', 'always', 'obeys'): 1,\n", " ('the', 'party'): 1,\n", " ('to',): 1,\n", " ('to', 'action'): 1,\n", " ('to', 'action', 'which'): 1,\n", " ('to', 'action', 'which', 'ensures'): 1,\n", " ('which',): 1,\n", " ('which', 'ensures'): 1,\n", " ('which', 'ensures', 'that'): 1,\n", " ('which', 'ensures', 'that', 'the'): 1})" ] }, "execution_count": 56, "metadata": {}, "output_type": "execute_result" } ], "source": [ "get_ngrams(candidate1, 4)" ] }, { "cell_type": "code", "execution_count": 310, "metadata": { "collapsed": true }, "outputs": [], "source": [ "def compute_bleu(reference_corpus, translation_corpus, max_order=4,\n", " smooth=False):\n", " \"\"\"Computes BLEU score of translated segments against one or more references.\n", "\n", " Args:\n", " reference_corpus: list of lists of references for each translation. Each\n", " reference should be tokenized into a list of tokens.\n", " translation_corpus: list of translations to score. Each translation\n", " should be tokenized into a list of tokens.\n", " max_order: Maximum n-gram order to use when computing BLEU score.\n", " smooth: Whether or not to apply Lin et al. 2004 smoothing.\n", "\n", " Returns:\n", " 3-Tuple with the BLEU score, n-gram precisions, geometric mean of n-gram\n", " precisions and brevity penalty.\n", " \"\"\"\n", " matches_by_order = [0] * max_order\n", " possible_matches_by_order = [0] * max_order\n", " reference_length = 0\n", " translation_length = 0\n", " # references 和 translation 放在一起\n", " # 可以同时处理多个翻译结果\n", " # zip(X,X)[0] 就是 references,[1] 是 translation\n", " for (references, translation) in zip(reference_corpus, translation_corpus):\n", " # 长度最短的\n", " reference_length += min(len(r) for r in references)\n", " translation_length += len(translation)\n", "\n", " merged_ref_ngram_counts = collections.Counter()\n", " for reference in references:\n", " # or 操作,获取所有 references 可能的 ngram\n", " # 取或后,会取到较大的频率\n", " merged_ref_ngram_counts |= get_ngrams(reference, max_order)\n", " translation_ngram_counts = get_ngrams(translation, max_order)\n", " # 取两者交集\n", " # 取交集会取到较小者\n", " overlap = translation_ngram_counts & merged_ref_ngram_counts\n", " for ngram in overlap:\n", " # 每个 n-gram P 值的分子,即 references 最大与候选译文比较较小值之和\n", " matches_by_order[len(ngram)-1] += overlap[ngram]\n", " for order in range(1, max_order+1):\n", " # 每个 n-gram P 值的分母,即候选译文 ngram 的数量求和\n", " possible_matches = len(translation) - order + 1\n", " if possible_matches > 0:\n", " possible_matches_by_order[order-1] += possible_matches\n", "\n", " precisions = [0] * max_order\n", " for i in range(0, max_order):\n", " if smooth:\n", " precisions[i] = ((matches_by_order[i] + 1.) /\n", " (possible_matches_by_order[i] + 1.))\n", " else:\n", " if possible_matches_by_order[i] > 0:\n", " precisions[i] = (float(matches_by_order[i]) / possible_matches_by_order[i])\n", " # 考虑了 ngram 不存在的时候(句子太短),或者 n 太大,这种时候 bleu 也为 0\n", " else:\n", " precisions[i] = 0.0\n", "\n", " if min(precisions) > 0:\n", " p_log_sum = sum((1. / max_order) * math.log(p) for p in precisions)\n", " geo_mean = math.exp(p_log_sum)\n", " else:\n", " geo_mean = 0\n", "\n", " ratio = float(translation_length) / reference_length\n", " # ratio = c/r, 候选/参考\n", " # r 应该选取长度最接近候选翻译的,此处选择最小的,如果 > 1,那 bp 肯定为 1;如果 < 1,那最小的也就是最接近候选译文的\n", " if ratio > 1.0:\n", " bp = 1.\n", " else:\n", " bp = math.exp(1 - 1. / ratio)\n", "\n", " bleu = geo_mean * bp\n", "\n", " return (bleu, precisions, bp, ratio, translation_length, reference_length)" ] }, { "cell_type": "code", "execution_count": 215, "metadata": { "scrolled": false }, "outputs": [ { "data": { "text/plain": [ "(0.5045666840058485,\n", " [0.9444444444444444, 0.5882352941176471, 0.4375, 0.26666666666666666],\n", " 1.0,\n", " 1.125,\n", " 18,\n", " 16)" ] }, "execution_count": 215, "metadata": {}, "output_type": "execute_result" } ], "source": [ "compute_bleu([references], [candidate1])" ] }, { "cell_type": "code", "execution_count": 157, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "(0.0,\n", " [0.5714285714285714, 0.07692307692307693, 0.0, 0.0],\n", " 0.8668778997501817,\n", " 0.875,\n", " 14,\n", " 16)" ] }, "execution_count": 157, "metadata": {}, "output_type": "execute_result" } ], "source": [ "compute_bleu([references], [candidate2])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### 分解步骤" ] }, { "cell_type": "code", "execution_count": 288, "metadata": { "collapsed": true }, "outputs": [], "source": [ "weights = [0.25, 0.25, 0.25, 0.25]\n", "candidate1 = ['the', 'commands', 'of', 'the', 'party']\n", "reference1 = ['will', 'forever','heed', 'Party', 'commands']\n", "reference2 = ['the', 'command', 'of', 'the','Party']\n", "reference3 = ['the', 'the','of', 'the', 'party']" ] }, { "cell_type": "code", "execution_count": 289, "metadata": { "collapsed": true }, "outputs": [], "source": [ "references = [reference1, reference2, reference3]" ] }, { "cell_type": "code", "execution_count": 290, "metadata": { "collapsed": true }, "outputs": [], "source": [ "reference_corpus = [references]" ] }, { "cell_type": "code", "execution_count": 291, "metadata": { "collapsed": true }, "outputs": [], "source": [ "translation_corpus = [candidate1]" ] }, { "cell_type": "code", "execution_count": 292, "metadata": { "collapsed": true }, "outputs": [], "source": [ "max_order = 1" ] }, { "cell_type": "code", "execution_count": 293, "metadata": { "collapsed": true }, "outputs": [], "source": [ "matches_by_order = [0] * max_order\n", "possible_matches_by_order = [0] * max_order\n", "reference_length = 0\n", "translation_length = 0" ] }, { "cell_type": "code", "execution_count": 294, "metadata": {}, "outputs": [], "source": [ "for (references, translation) in zip(reference_corpus, translation_corpus):\n", " reference_length = min(len(r) for r in references)\n", " translation_length = len(translation)\n", "\n", " merged_ref_ngram_counts = collections.Counter()\n", " for reference in references:\n", " merged_ref_ngram_counts |= get_ngrams(reference, max_order)\n", " translation_ngram_counts = get_ngrams(translation, max_order)\n", " overlap = translation_ngram_counts & merged_ref_ngram_counts\n", " for ngram in overlap:\n", " matches_by_order[len(ngram)-1] += overlap[ngram]\n", " for order in range(1, max_order+1):\n", " possible_matches = len(translation) - order + 1\n", " if possible_matches > 0:\n", " possible_matches_by_order[order-1] += possible_matches" ] }, { "cell_type": "code", "execution_count": 295, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "Counter({('Party',): 1,\n", " ('commands',): 1,\n", " ('forever',): 1,\n", " ('heed',): 1,\n", " ('will',): 1})" ] }, "execution_count": 295, "metadata": {}, "output_type": "execute_result" } ], "source": [ "get_ngrams(reference1, max_order)" ] }, { "cell_type": "code", "execution_count": 296, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Counter({('Party',): 1, ('command',): 1, ('of',): 1, ('the',): 2})" ] }, "execution_count": 296, "metadata": {}, "output_type": "execute_result" } ], "source": [ "get_ngrams(reference2, max_order)" ] }, { "cell_type": "code", "execution_count": 297, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Counter({('of',): 1, ('party',): 1, ('the',): 3})" ] }, "execution_count": 297, "metadata": {}, "output_type": "execute_result" } ], "source": [ "get_ngrams(reference3, max_order)" ] }, { "cell_type": "code", "execution_count": 298, "metadata": { "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "Counter({('Party',): 1,\n", " ('command',): 1,\n", " ('commands',): 1,\n", " ('forever',): 1,\n", " ('heed',): 1,\n", " ('of',): 1,\n", " ('party',): 1,\n", " ('the',): 3,\n", " ('will',): 1})" ] }, "execution_count": 298, "metadata": {}, "output_type": "execute_result" } ], "source": [ "merged_ref_ngram_counts" ] }, { "cell_type": "code", "execution_count": 299, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Counter({('Party',): 1,\n", " ('command',): 1,\n", " ('commands',): 1,\n", " ('forever',): 1,\n", " ('heed',): 1,\n", " ('of',): 1,\n", " ('party',): 1,\n", " ('the',): 3,\n", " ('will',): 1})" ] }, "execution_count": 299, "metadata": {}, "output_type": "execute_result" } ], "source": [ "get_ngrams(reference3, max_order) | get_ngrams(reference2, max_order) | get_ngrams(reference1, max_order)" ] }, { "cell_type": "code", "execution_count": 300, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Counter({('commands',): 1, ('of',): 1, ('party',): 1, ('the',): 2})" ] }, "execution_count": 300, "metadata": {}, "output_type": "execute_result" } ], "source": [ "translation_ngram_counts" ] }, { "cell_type": "code", "execution_count": 301, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Counter({('commands',): 1, ('of',): 1, ('party',): 1, ('the',): 2})" ] }, "execution_count": 301, "metadata": {}, "output_type": "execute_result" } ], "source": [ "overlap" ] }, { "cell_type": "code", "execution_count": 302, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[5]" ] }, "execution_count": 302, "metadata": {}, "output_type": "execute_result" } ], "source": [ "matches_by_order" ] }, { "cell_type": "code", "execution_count": 303, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[5]" ] }, "execution_count": 303, "metadata": {}, "output_type": "execute_result" } ], "source": [ "possible_matches_by_order" ] }, { "cell_type": "code", "execution_count": 304, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "5" ] }, "execution_count": 304, "metadata": {}, "output_type": "execute_result" } ], "source": [ "reference_length" ] }, { "cell_type": "code", "execution_count": 305, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "5" ] }, "execution_count": 305, "metadata": {}, "output_type": "execute_result" } ], "source": [ "translation_length" ] }, { "cell_type": "markdown", "metadata": { "collapsed": true }, "source": [ "## 思考" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 评估思路\n", "\n", "方法本身比较简单,理念和公式都不复杂,但评估思路非常严谨,令人信服。 \n", "而这对我们很多算法应用时的评估都非常有借鉴意义。 \n", "\n", "具体分为三个步骤: \n", "\n", "- 算法评估\n", " - 随机测试数据\n", " - 在不同层次上的评估,以期算法能具有区分性\n", " - 对数据分组或换一组数据,每组数据取均值后,组组之间依然具有区分性,且期望与整体一致\n", " - 其他影响算法的变量(如文中参考译文的数量)\n", "- 人为评估\n", " - 从测试数据中随机选取一定数量\n", " - 对算法区分出的结果进行人为打分\n", " - 考虑不同类型的人对结果的影响,是否需要对人进行分类\n", " - 考察人为打分结果是否具有区分性,且与算法结果一致\n", "- 算法与人评估的相关性\n", " - 把算法评估的结果与人为评估的结果线性拟合看相关性\n", " - 每种评估方式在不同层次数据集上归一化的结果,主要看每种评估方式的相似性和不同层次数据集上的表现,相对指标" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 代码\n", "\n", "nltk 的代码简单易懂,也很直观; \n", "Google 的代码抽象性很好,考虑了平滑和多样本处理。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 参考" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "- 这篇文章写得通俗易懂:[机器翻译评价指标之BLEU - - CSDN博客](http://blog.csdn.net/guolindonggld/article/details/56966200)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10" }, "latex_envs": { "LaTeX_envs_menu_present": true, "autoclose": false, "autocomplete": true, "bibliofile": "biblio.bib", "cite_by": "apalike", "current_citInitial": 1, "eqLabelWithNumbers": true, "eqNumInitial": 1, "hotkeys": { "equation": "Ctrl-E", "itemize": "Ctrl-I" }, "labels_anchors": false, "latex_user_defs": false, "report_style_numbering": false, "user_envs_cfg": false }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": {}, "toc_section_display": true, "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 2 }