Training Hierarchical Feed-forward Visual Recognition Models

Overview of Back Propagation Algorithm
Shuiwang Ji

A Sample Network / Forward Operation
The general feed-forward operation, for a network with inputs $x_i$, hidden units $y_j$, and outputs $z_k$, is:

$z_k = f\big(\textstyle\sum_j w_{kj}\, f(\sum_i w_{ji} x_i + w_{j0}) + w_{k0}\big)$

Back Propagation Algorithm
The hidden-to-output weights can be learned directly by minimizing the error. The power of back-propagation is that it allows us to calculate an effective error for each hidden unit, and thus derive a learning rule for the input-to-hidden weights as well. We consider the squared error function

$J(\mathbf{w}) = \tfrac{1}{2} \sum_k (t_k - z_k)^2$

and the gradient-descent update rule

$\Delta w = -\eta\, \partial J / \partial w.$

Hidden-to-output Weights
$\Delta w_{kj} = \eta\, \delta_k\, y_j$, where $\delta_k = (t_k - z_k)\, f'(\mathit{net}_k)$ is the sensitivity of output unit $k$.

Input-to-hidden Weights
$\Delta w_{ji} = \eta\, \delta_j\, x_i$.

Back Propagation of Sensitivity
Each hidden unit's sensitivity is the weighted sum of the sensitivities of the output units it feeds, passed back through the derivative of its activation:

$\delta_j = f'(\mathit{net}_j) \sum_k w_{kj}\, \delta_k$

(A runnable sketch of these updates is given after the slides.)

Training Hierarchical Feed-forward Visual Recognition Models Using Transfer Learning from Pseudo-Tasks (ECCV'08, Kai Yu)
Presented by Shuiwang Ji

Transfer Learning
Transfer learning, also known as multi-task learning, is a mechanism that improves generalization by leveraging shared domain-specific information contained in related tasks. In the setting considered in this paper, all tasks share the same input space.

General Formulation
The main task to be learnt has index $m$, with training examples $(x_i, y_i^m)$. A neural network has a natural architecture to tackle this learning problem: minimize the main-task loss plus a regularization term on the shared lower layers.

The shared representation is learned by additionally introducing pseudo auxiliary tasks, each represented by learning input-output pairs $(x_i, y_i^k)$ for pseudo-task $k$; the regularization term then becomes the summed loss of the pseudo-tasks over the shared representation (see the objective sketch after the slides).

A Bayesian Perspective (skipped)

CNN for Transfer Learning

Generating Pseudo Tasks
Applying Gabor filters of 4 orientations and 16 scales results in 64 feature maps of size 104*104 for each image. Max pooling is performed first within each non-overlapping 4*4 neighborhood and then within each band of two successive scales, resulting in 32 feature maps of size 26*26 per image. A set of K RBF filters of size 7*7 with 4 orientations is then sampled and used as the parameters of the pseudo-tasks, resulting in 8 feature maps of size 20*20. Finally, max pooling is performed on the result across all the scales and within every non-overlapping neighborhood. (A NumPy sketch of the first filtering and pooling stages follows below.)
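The back-propagation slides above condense into a few lines of code. Below is a minimal NumPy sketch, assuming a single hidden layer, sigmoid activations, squared error, and omitted bias terms; all names are illustrative, not taken from the slides.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def backprop_step(x, t, W1, W2, eta=0.1):
    """One gradient-descent step on J(w) = 0.5 * ||t - z||^2.

    x:  input vector, t: target vector
    W1: input-to-hidden weights, W2: hidden-to-output weights
    (bias terms omitted for brevity)
    """
    # Forward operation: z_k = f(sum_j w_kj * f(sum_i w_ji * x_i))
    y = sigmoid(W1 @ x)          # hidden activations
    z = sigmoid(W2 @ y)          # network outputs

    # Output sensitivities: delta_k = (t_k - z_k) * f'(net_k);
    # for the sigmoid, f'(net_k) = z * (1 - z)
    delta_k = (t - z) * z * (1 - z)
    # Back-propagated hidden sensitivities:
    # delta_j = f'(net_j) * sum_k w_kj * delta_k
    delta_j = y * (1 - y) * (W2.T @ delta_k)

    # Update rules: Delta w = -eta * dJ/dw
    W2 += eta * np.outer(delta_k, y)   # hidden-to-output
    W1 += eta * np.outer(delta_j, x)   # input-to-hidden
    return W1, W2
```

Note how `delta_j` is exactly the back-propagated sensitivity from the slides: the output errors flow backward through `W2` and are gated by the derivative of the hidden activation.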
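The General Formulation slides state the objective only verbally. A plausible rendering, assuming squared-error pseudo-task losses and a shared feature map $\phi_\theta$; the symbols $\lambda$, $w_m$, $w_k$ are illustrative, not the paper's own notation:

```latex
% Illustrative objective: main-task loss plus pseudo-task regularizer.
% \phi_\theta is the shared representation; \lambda, w_m, w_k are
% assumed symbols, not the paper's notation.
\min_{\theta,\; w_m,\; \{w_k\}_{k=1}^{K}}
  \sum_{i=1}^{N} L\!\left(y_i^{m},\, w_m^{\top}\phi_\theta(x_i)\right)
  \;+\; \lambda \sum_{k=1}^{K} \sum_{i=1}^{N}
        \left(y_i^{k} - w_k^{\top}\phi_\theta(x_i)\right)^{2}
```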
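In code, the same objective reads as a main-task loss plus a pseudo-task penalty on the shared features. A hedged sketch with illustrative shapes and names (`phi`, `lam`, and the linear task heads are assumptions, not the paper's API):

```python
import numpy as np

def multitask_loss(phi, w_main, W_pseudo, X, y_main, Y_pseudo, lam=0.1):
    """Main-task loss plus pseudo-task squared-error regularizer.

    phi:      callable mapping an input batch to shared features (N, D)
    w_main:   (D,) main-task weights; W_pseudo: (K, D) pseudo-task weights
    y_main:   (N,) targets; Y_pseudo: (K, N) pseudo-task targets
    """
    F = phi(X)                                    # shared representation
    main = 0.5 * np.sum((y_main - F @ w_main) ** 2)
    pseudo = np.sum((Y_pseudo - W_pseudo @ F.T) ** 2)
    return main + lam * pseudo
```

Because the pseudo-task targets are cheap to generate, the second term can regularize the shared layers with far more supervision than the main task alone provides.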
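The Generating Pseudo Tasks pipeline can likewise be sketched. The Gabor parameterization below (kernel sizes, wavelengths, and the scale-major map ordering) is an assumption for illustration; the slides specify only the filter counts and output map sizes.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, theta, wavelength, sigma=None, gamma=0.5):
    """Real-valued Gabor kernel (illustrative parameterization)."""
    sigma = sigma or 0.5 * wavelength
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    yr = -xs * np.sin(theta) + ys * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / wavelength)

def max_pool(fmap, k):
    """Non-overlapping k*k max pooling (crops any remainder)."""
    h, w = (fmap.shape[0] // k) * k, (fmap.shape[1] // k) * k
    return fmap[:h, :w].reshape(h // k, k, w // k, k).max(axis=(1, 3))

def pseudo_task_features(image):
    """Stage-1 pipeline from the slides: Gabor bank, then max pooling
    within 4*4 neighborhoods and across pairs of successive scales."""
    maps = []
    for scale in range(16):                       # 16 scales
        for o in range(4):                        # 4 orientations
            k = gabor_kernel(7 + 2 * scale, theta=o * np.pi / 4,
                             wavelength=4 + scale)
            maps.append(convolve2d(image, k, mode='same'))
    pooled = [max_pool(m, 4) for m in maps]       # 4*4 spatial pooling
    # pool each band of two successive scales, per orientation
    return [np.maximum(pooled[8 * s + o], pooled[8 * s + 4 + o])
            for s in range(8) for o in range(4)]  # 32 maps per image
```

With 16 scales and 4 orientations this yields 64 maps; 4*4 pooling followed by pooling over pairs of successive scales leaves 32 maps, matching the counts on the slide. The subsequent RBF-filter stage that defines the K pseudo-tasks is omitted here.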
