From Language Modeling to Language Generation
Lili Mou
doublepower.mou@
Outline
● Language Modeling
● Neural Language Models
● Sequence to Sequence Generation
● Backward and Forward Language Modeling
● Discussion
Language Modeling
● Given a corpus of sentences, where each sentence is a word sequence $\mathbf{w} = (w_1, w_2, \ldots, w_T)$
● The goal is to maximize the corpus likelihood, with each sentence factorized by the chain rule:
  $p(\mathbf{w}) = \prod_{t=1}^{T} p(w_t \mid w_1, \ldots, w_{t-1})$
● Can we decompose any probability distribution into this form? Yes.
● Is it necessary to decompose a probability distribution into this form? No.
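As a minimal sketch (not from the slides), a sentence can be scored left to right under this factorization; `cond_prob` is a hypothetical stand-in for any model of $p(w_t \mid w_1, \ldots, w_{t-1})$:

```python
# A minimal sketch of the chain-rule factorization above (not from the
# slides). `cond_prob` is a hypothetical callable returning
# p(w_t | w_1, ..., w_{t-1}); any model supplying such conditionals
# defines a valid joint distribution over sentences.
import math

def sentence_log_prob(words, cond_prob):
    """log p(w_1..w_T) = sum_t log p(w_t | w_1, ..., w_{t-1})."""
    log_p = 0.0
    for t, word in enumerate(words):
        log_p += math.log(cond_prob(word, words[:t]))
    return log_p
```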
Markov Assumption
● A word depends only on its previous n-1 words and is independent of its position:
  $p(w_t \mid w_1, \ldots, w_{t-1}) \approx p(w_t \mid w_{t-n+1}, \ldots, w_{t-1})$
● I.e., given the previous n-1 words, the current word is conditionally independent of all other random variables.
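In code, the assumption amounts to truncating the conditioning history before it reaches the model; a small illustrative helper (the name is mine, not from the slides):

```python
# A sketch of the Markov assumption: the history passed to the conditional
# model is truncated to the previous n-1 words (helper name is illustrative).
def truncate_history(history, n):
    """Keep only the last n-1 words of the history (empty for a unigram model)."""
    k = n - 1
    return tuple(history[-k:]) if k > 0 else ()
```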
Multinomial Estimate
● Maximum likelihood estimation for a multinomial distribution is merely counting:
  $p(w_t \mid w_{t-n+1}, \ldots, w_{t-1}) = \frac{\mathrm{count}(w_{t-n+1}, \ldots, w_t)}{\mathrm{count}(w_{t-n+1}, \ldots, w_{t-1})}$
● Problem: the number of parameters grows exponentially with respect to n.
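A sketch of the counting estimator under stated assumptions (the corpus is a list of tokenized sentences; smoothing for unseen n-grams is deliberately omitted):

```python
from collections import Counter

# n-gram maximum likelihood estimation by counting; no smoothing, so unseen
# prefixes or n-grams will fail or return zero probability.
def train_ngram_mle(corpus, n):
    ngram_counts, prefix_counts = Counter(), Counter()
    for sentence in corpus:
        padded = ["<s>"] * (n - 1) + sentence + ["</s>"]
        for i in range(n - 1, len(padded)):
            ngram = tuple(padded[i - n + 1 : i + 1])
            ngram_counts[ngram] += 1           # count(w_{t-n+1}, ..., w_t)
            prefix_counts[ngram[:-1]] += 1     # count(w_{t-n+1}, ..., w_{t-1})

    def cond_prob(word, prefix):
        """MLE estimate: count(prefix + word) / count(prefix)."""
        prefix = tuple(prefix)
        return ngram_counts[prefix + (word,)] / prefix_counts[prefix]

    return cond_prob
```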
Outline
● Language Modeling
● Neural Language Models
● Sequence to Sequence Generation
● Backward and Forward Language Modeling
● Discussion
Parametrizing LMs with Neural Networks
● Each word is mapped to a real-valued vector, called an embedding.
● Neural layers capture context information (typically the previous words).
● The probability $p(w \mid \cdot)$ is predicted by a softmax layer.
Feed-Forward Language Model
The Markov assumption also holds: the input is a fixed window of the previous n-1 words.
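A hedged PyTorch sketch of such a model (the framework, layer sizes, and names are my assumptions, not taken from the slides):

```python
import torch
import torch.nn as nn

class FeedForwardLM(nn.Module):
    """A Bengio-style feed-forward LM sketch: embed the previous n-1 words,
    pass them through one hidden layer, and predict the next word with a
    softmax over the vocabulary. Hyper-parameters here are illustrative."""

    def __init__(self, vocab_size, n, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.hidden = nn.Linear((n - 1) * emb_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, context):                  # context: (batch, n-1) word ids
        e = self.embed(context)                  # (batch, n-1, emb_dim)
        h = torch.tanh(self.hidden(e.flatten(1)))
        return torch.log_softmax(self.out(h), dim=-1)  # log p(w | previous n-1 words)
```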
Recurrent Neural Language Model
● An RNN keeps one or a few hidden states.
● The hidden states are updated at each time step according to the input.
● The RNN directly parametrizes $p(w_t \mid w_1, \ldots, w_{t-1})$,
  rather than the truncated $p(w_t \mid w_{t-n+1}, \ldots, w_{t-1})$.
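A corresponding PyTorch sketch (the GRU cell and all sizes are illustrative choices, not prescribed by the slides):

```python
import torch
import torch.nn as nn

class RNNLM(nn.Module):
    """An RNN LM sketch: the hidden state summarizes the entire history, so
    p(w_t | w_1, ..., w_{t-1}) needs no Markov truncation."""

    def __init__(self, vocab_size, emb_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):        # tokens: (batch, seq_len) word ids
        h, state = self.rnn(self.embed(tokens), state)
        return torch.log_softmax(self.out(h), dim=-1), state
```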
Language Generation from LMs
● Maximum a posteriori: choose the most likely words
  – Greedy decoding: pick the single most likely word at each step
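A sketch of greedy decoding, assuming the RNNLM interface from the previous sketch and hypothetical `bos_id`/`eos_id` special-token ids:

```python
import torch

# Greedy (arg-max) decoding: at each step feed the last generated word and
# pick the most likely next word, an approximation to the MAP sequence.
def greedy_generate(model, bos_id, eos_id, max_len=30):
    tokens, state = [bos_id], None
    for _ in range(max_len):
        inp = torch.tensor([[tokens[-1]]])        # shape (1, 1): last word only
        log_p, state = model(inp, state)          # assumes RNNLM-style interface
        next_id = int(log_p[0, -1].argmax())      # arg-max = most likely word
        tokens.append(next_id)
        if next_id == eos_id:
            break
    return tokens
```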