Deep Learning Sparse Coding Tutorial

MNIST results -- learned dictionary
- A hidden unit in the second layer is connected to a group of units in the first layer, giving invariance to translation, rotation, and deformation. (Yu, Lin, Lafferty, CVPR 11)

Caltech101 results -- classification
- The learned descriptor performs slightly better than SIFT + SC. (Yu, Lin, Lafferty, CVPR 11)

Adaptive Deconvolutional Networks for Mid and High Level Feature Learning (Matthew D. Zeiler, Graham W. Taylor, and Rob Fergus, ICCV 2011)
- Hierarchical convolutional sparse coding.
- All layers (L1-L4) are trained with respect to the input image.
- Pooling both spatially and among features.
- Learns invariant mid-level features.
[Figure: network diagram with labels -- Image, L1 features, L1 feature maps, L2 feature maps (select L2 feature groups), L3 feature maps (select L3 feature groups), L4 feature maps (select L4 features)]

Outline
- Sparse coding for image classification
- Understanding sparse coding
- Hierarchical sparse coding
- Other topics, e.g. structured models, scale-up, discriminative training
- Summary

Other topics of sparse coding
- Structured sparse coding, for example:
  - Group sparse coding [Bengio et al., NIPS 09]
  - Learning hierarchical dictionaries [Jenatton, Mairal et al., 2010]
- Scaling up sparse coding, for example:
  - Feature-sign algorithm [Lee et al., NIPS 07]
  - Feed-forward approximation [Gregor & LeCun, ICML 10] (see the LISTA-style sketch after the summary)
  - Online dictionary learning [Mairal et al., ICML 09] (see the online dictionary learning sketch after the summary)
- Discriminative training, for example:
  - Backprop algorithms [Bradley & Bagnell, NIPS 08; Yang et al., CVPR 10]
  - Supervised dictionary training [Mairal et al., NIPS 08]

Summary of sparse coding
- Sparse coding is an effective way to do (unsupervised) feature learning (see the ISTA sketch below).
- A building block for deep models.
- Sparse coding and its local variants (LCC, SVC) have pushed the boundary of accuracies on Caltech101, PASCAL VOC, ImageNet, …
- Challenge: discriminative training is not straightforward.

Speaker notes
- "Let's further check what's happening when the best classification performance is achieved."
- "Hi, I'm here to talk about my poster, Adaptive Deconvolutional Networks for Mid and High Level Feature Learning."
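To make the "effective way for unsupervised feature learning" bullet concrete, here is a minimal NumPy sketch of the sparse coding inference step, solved with iterative shrinkage-thresholding (ISTA). It is not code from the tutorial: the function name ista_sparse_code, the dictionary, the patch sizes, and the input are all hypothetical placeholders, chosen only to illustrate the L1-regularized objective.

```python
# Minimal ISTA sketch for the sparse coding inference step:
#   a* = argmin_a  0.5 * ||x - D a||^2 + lam * ||a||_1
# D and x below are random placeholders (toy sizes), not real data.
import numpy as np

def ista_sparse_code(x, D, lam=0.1, n_iter=100):
    """Iterative shrinkage-thresholding for one input vector x."""
    L = np.linalg.norm(D, ord=2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)               # gradient of the quadratic term
        z = a - grad / L                       # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))             # e.g. 8x8 patches, 256 atoms
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms
x = rng.standard_normal(64)
a = ista_sparse_code(x, D)
print("non-zeros in the code:", np.count_nonzero(np.abs(a) > 1e-8))
```

In a classification pipeline, the resulting sparse codes would then be pooled over image regions and fed to a linear classifier, as discussed in the earlier parts of the tutorial.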
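The feed-forward approximation of Gregor & LeCun (ICML 10) replaces the iterative solver with a small number of unrolled, learnable shrinkage steps (LISTA). The sketch below shows only the forward recurrence with an ISTA-inspired initialization of the parameters We, S, and theta; the training loop that tunes them to mimic the exact codes is omitted, and all sizes are hypothetical.

```python
# LISTA-style feed-forward encoder (sketch): b = We x; a <- soft(b + S a, theta),
# repeated for a few "layers". Parameters are only initialized from the dictionary
# here (an assumption for illustration); no learning is shown.
import numpy as np

def soft_threshold(z, theta):
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def lista_encode(x, We, S, theta, n_layers=3):
    b = We @ x
    a = soft_threshold(b, theta)
    for _ in range(n_layers - 1):
        a = soft_threshold(b + S @ a, theta)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
L = np.linalg.norm(D, ord=2) ** 2
We = D.T / L                                   # ISTA-inspired initialization
S = np.eye(256) - (D.T @ D) / L
x = rng.standard_normal(64)
a = lista_encode(x, We, S, theta=0.1 / L)      # with this init, equals 3 ISTA steps
```

With the initialization above, the recurrence reproduces plain ISTA truncated to n_layers steps; the point of the method is that training We, S, and theta lets very few layers approximate the fully converged codes.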
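For online dictionary learning in the spirit of Mairal et al. (ICML 09), scikit-learn's MiniBatchDictionaryLearning is a readily available implementation. The sketch below is hedged accordingly: it runs on random placeholder "patches" rather than a real image dataset, and the component count, penalty, and batch size are illustrative choices, not values from the tutorial.

```python
# Online (mini-batch) dictionary learning sketch using scikit-learn.
# The patch matrix is random placeholder input, not real image data.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
patches = rng.standard_normal((5000, 64))           # e.g. 8x8 patches, flattened
patches -= patches.mean(axis=1, keepdims=True)      # remove the DC component

dico = MiniBatchDictionaryLearning(
    n_components=256,            # number of dictionary atoms
    alpha=1.0,                   # sparsity penalty (lambda)
    batch_size=256,              # mini-batch size for the online updates
    transform_algorithm="lasso_lars",
    random_state=0,
)
codes = dico.fit(patches).transform(patches)        # sparse codes, shape (5000, 256)
D = dico.components_                                # learned dictionary, shape (256, 64)
print("avg non-zeros per code:", np.mean(np.count_nonzero(codes, axis=1)))
```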
