Lectur_2-Mathmatic_Foundations-v2.ppt


Entropy

- Entropy measures the amount of information in a random variable; the unit is bits.
- It is the least (average) number of bits needed to encode a message.
- It is the average information of X, i.e. the expectation of the self-information:
  H(X) = E[I(X)] = E[-log2 p(x)] = Σ p(x) (-log2 p(x)).

Entropy: Example

- Toss a fair coin, Ω = {H, T}:
  p(H) = 0.5, p(T) = 0.5
  H(p) = -0.5 log2 0.5 - 0.5 log2 0.5 = 1 bit
- Toss a biased coin, Ω = {H, T}:
  p(H) = 0.2,  p(T) = 0.8  →  H(p) ≈ 0.722 bits
  p(H) = 0.01, p(T) = 0.99 →  H(p) ≈ 0.081 bits
- Roll a fair 32-sided die:
  H(p) = -32 × (1/32) log2(1/32) = 5 bits
- Perplexity = 2^H: 2^1 = 2 for the fair coin, 2^5 = 32 for the die.

Entropy: Properties

- Entropy ≥ 0.
- H(p) = 0 only when the value of X is determinate, hence providing no new information:
  there is an x ∈ Ω with p(x) = 1, and p(y) = 0 for every y ≠ x.
- Entropy grows with the size of the outcome space: for |Ω| = n, H(p) ≤ log2 n,
  with the maximum attained by the uniform distribution.
  2 outputs: H(p) = 1 bit
  32 outputs: H(p) = 5 bits
  4.3 billion (2^32) outputs: H(p) = 32 bits

Perplexity in NLP

- Definition: the perplexity 2^H(p) is the effective number of equally likely outputs per experiment.
- Perplexity is a common way of evaluating language models; a language model is a probability distribution over entire sentences or texts.
- A uniform distribution over the words of the lexicon has the largest entropy, hence the highest perplexity, and is the hardest to predict.

Perplexity in NLP: An Example

- If the average sentence xi in the test sample could be coded in 190 bits, the model's perplexity is 2^190 per sentence.
- The lowest perplexity published on the Brown Corpus (1 million words of American English of varying topics and genres) is about 247 per word, i.e. roughly 7.95 bits per word, or 1.75 bits per letter, using a trigram model.
- It is often possible to achieve lower perplexity on more specialized corpora, as they are more predictable.
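To make the numbers on these slides concrete, here is a minimal Python sketch (not part of the original lecture) that computes the entropy H(p) = -Σ p(x) log2 p(x) of a discrete distribution and the corresponding perplexity 2^H. It reproduces the fair-coin, biased-coin and 32-sided-die figures above, and shows how a per-word cross-entropy of about 7.95 bits maps to a per-word perplexity of about 247. The function names and printed values are illustrative assumptions, not material from the slides.

```python
import math

def entropy(p):
    """Shannon entropy in bits of a discrete distribution given as a list of probabilities.
    Outcomes with probability 0 are skipped, following the convention 0*log(0) = 0."""
    return -sum(px * math.log2(px) for px in p if px > 0)

def perplexity(p):
    """Perplexity of a discrete distribution: 2 ** H(p)."""
    return 2 ** entropy(p)

# Fair coin: H = 1 bit, perplexity = 2
print(entropy([0.5, 0.5]), perplexity([0.5, 0.5]))   # 1.0  2.0

# Biased coins from the slides: H ≈ 0.722 and ≈ 0.081 bits
print(entropy([0.2, 0.8]))                            # ~0.722
print(entropy([0.01, 0.99]))                          # ~0.081

# Fair 32-sided die: H = 5 bits, perplexity = 32
die = [1 / 32] * 32
print(entropy(die), perplexity(die))                  # 5.0  32.0

# Determinate outcome carries no information: H = 0
print(entropy([1.0, 0.0]))                            # 0.0

# Language-model view: a cross-entropy of ~7.95 bits per word
# (the Brown Corpus trigram figure cited above) corresponds to
# a per-word perplexity of about 247.
print(2 ** 7.95)                                      # ~247
```

Base-2 logarithms are what ties entropy to bits here; using the natural logarithm instead would give the same quantities in nats.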
