Report on Semi-supervised Training for Statistical Parsing

Zhang Hao, 2002-12-18

Brief Introduction
- Why semi-supervised training?
- The co-training framework and its applications
- Can parsing fit in this framework? How?
- Conclusion

Why Semi-supervised Training
- A compromise between supervised and unsupervised training.
- Pay-offs:
  - minimize the need for labeled data
  - maximize the value of unlabeled data
  - easy portability

Co-training Scenario
- Idea: two different students learn from each other, incrementally and mutually improving ("when two walk together, each can serve as the other's teacher").
- Difference (the motive) → mutual learning (the optimization) → agreement (the objective).
- Task: optimize the objective function of agreement (a formalization is sketched at the end of this report).
- Heuristic selection is important: what should each student learn? [Blum & Mitchell, 98]

Co-training Assumptions
- The task is a classification problem.
- Feature redundancy allows different views of the data.
- Each view is by itself sufficient for classification.
- The views' features are conditionally independent given the class: P(x1, x2 | y) = P(x1 | y) P(x2 | y). [Blum & Mitchell, 98]

Co-training Example
- "Course home page" classification (yes/no).
- Two views: the page's own content text and the anchor text of links pointing to it (a more perfect example would be the two sides of a coin).
- Two naïve Bayes classifiers, one per view, which should agree. [Blum & Mitchell, 98]

Co-Training Algorithm
Given:
- a set L of labeled training examples
- a set U of unlabeled examples
Create a pool U' of examples by choosing u examples at random from U.
Loop for k iterations (a runnable sketch of this loop follows at the end of this report):
- Use L to train a classifier h1 that considers only the x1 portion of x.
- Use L to train a classifier h2 that considers only the x2 portion of x.
- Allow h1 to label p positive and n negative examples from U'.
- Allow h2 to label p positive and n negative examples from U'.
- Add these self-labeled examples to L.
- Randomly choose 2p + 2n examples from U to replenish U'.

Family of Algorithms Related to Co-training

Parsing as Supertagging and Attaching [Sarkar 2001]
- How parsing differs from the other NLP applications of co-training (WSD, WBPC, TC, NEI: word sense disambiguation, web-page classification, text classification, named-entity identification):
  - a full tree vs. a single label
  - composite vs. monolithic outputs
  - a large parameter space vs. a small one

LTAG
- Each word is tagged with a lexicalized elementary tree (supertagging).
- Parsing is then a process of attaching these elementary trees to one another.
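The scenario slide names agreement as the objective but does not define it. As a point of reference (an addition, not taken from the slides), later analyses of co-training such as Dasgupta, Littman & McAllester (2001) formalize it as minimizing the two classifiers' disagreement on unlabeled data while both remain consistent with the labeled set:

```latex
% Agreement objective for co-training: a sketch following the
% disagreement-based formalization of Dasgupta et al. (2001),
% not a definition taken from the slides above.
\[
  \min_{h_1,\, h_2} \;
  \Pr_{x \sim U}\!\left[\, h_1(x_1) \neq h_2(x_2) \,\right]
  \qquad \text{subject to } h_1, h_2 \text{ consistent with } L .
\]
```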

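The co-training loop above translates almost directly into code. The following is a minimal runnable sketch, assuming scikit-learn's MultinomialNB as the per-view classifier (matching the naïve Bayes example), binary labels, and count-vector features such as bags of words; the function name co_train and its parameter defaults are illustrative, not from the slides.

```python
import random

import numpy as np
from sklearn.naive_bayes import MultinomialNB


def co_train(L_x1, L_x2, L_y, U_x1, U_x2, p=1, n=3, u=75, k=30, seed=0):
    """Blum & Mitchell-style co-training over two feature views.

    L_x1, L_x2: view-1 and view-2 count vectors of the labeled examples.
    L_y:        their binary labels (0 or 1); both classes must appear.
    U_x1, U_x2: the same two views of the unlabeled examples.
    """
    rng = random.Random(seed)
    L_x1, L_x2, L_y = list(L_x1), list(L_x2), list(L_y)
    U = list(zip(U_x1, U_x2))
    rng.shuffle(U)
    U_pool, U = U[:u], U[u:]            # working pool U'

    for _ in range(k):
        # Train one classifier per view on the current labeled set L.
        h1 = MultinomialNB().fit(np.array(L_x1), L_y)
        h2 = MultinomialNB().fit(np.array(L_x2), L_y)
        if not U_pool:
            break

        # Each classifier self-labels its p most confidently positive
        # and n most confidently negative examples from U'.
        chosen = {}  # pool index -> label; later writes (h2) win on conflicts
        for h, view in ((h1, 0), (h2, 1)):
            X = np.array([ex[view] for ex in U_pool])
            pos_col = list(h.classes_).index(1)
            scores = h.predict_proba(X)[:, pos_col]
            ranked = np.argsort(scores)
            for i in ranked[-p:]:
                chosen[int(i)] = 1
            for i in ranked[:n]:
                chosen[int(i)] = 0

        # Move the self-labeled examples from U' into L.
        for i, label in chosen.items():
            x1, x2 = U_pool[i]
            L_x1.append(x1)
            L_x2.append(x2)
            L_y.append(label)
        U_pool = [ex for i, ex in enumerate(U_pool) if i not in chosen]

        # Replenish U' with 2p + 2n fresh examples drawn from U.
        U_pool += U[:2 * (p + n)]
        U = U[2 * (p + n):]

    return h1, h2
```

The defaults p = 1, n = 3, u = 75, k = 30 follow the values Blum and Mitchell report for the course-home-page experiment; in practice they are tuned per task, and the labeled seed set must contain both classes before the first fit.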