Principal Component Analysis Based on L1-Norm Maximization

Proof of orthonormality (greedy search algorithm). The projection vector is a linear combination of the samples, so it lies in the subspace spanned by them. In the greedy search algorithm, after the $(j-1)$-th projection vector $w_{j-1}$ is found, every sample is deflated by removing its component along $w_{j-1}$, i.e. $x_i \leftarrow x_i - w_{j-1}(w_{j-1}^\top x_i)$, and the next projection vector $w_j$ is a unit-norm vector ($\|w_j\|_2 = 1$) obtained from the deflated samples. Because $w_j$ is a linear combination of the deflated samples, and every deflated sample is orthogonal to $w_{j-1}$, $w_j$ is orthogonal to $w_{j-1}$. Hence the orthonormality of the projection vectors is guaranteed.

Algorithms. Even if the greedy search algorithm does not provide the optimal solution, it provides a set of good projections that maximize the L1 dispersion $\sum_{i=1}^{n} |w^\top x_i|$ (a code sketch of the procedure is given after this section).

Algorithms: choosing the number of features. For data analysis, we can decide how much of the data is captured. In PCA, we compute the eigenvalues $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_d$ of the covariance matrix; each eigenvalue is equivalent to the variance of the corresponding feature, so we can compute the ratio of its variance to the total variance. The sum of the variances of the first $m$ features is $\sum_{k=1}^{m} \lambda_k$; when this exceeds 95% of the total variance $\sum_{k=1}^{d} \lambda_k$, the number of extracted features is set to $m$.

In PCA-L1, once $w_j$ is obtained, we can likewise compute the variance of the $j$-th feature, $\frac{1}{n}\sum_{i=1}^{n} (w_j^\top x_i)^2$ for mean-centred data. The sum of these feature variances can be compared with the total variance of the data, so we can set the appropriate number of extracted features just as in the original PCA.

Experiments. We apply the PCA-L1 algorithm and compare it with R1-PCA and the original (L2) PCA in three experiments: a toy problem with an outlier, UCI data sets, and face reconstruction.

A Toy Problem with an Outlier. Consider a set of data points in a 2D space that contains a single outlier. If we discard the outlier, the projection vector should follow the dominant direction of the remaining points. With the outlier included, the projection vectors found by the three methods are compared through the average residual error, the mean distance between each sample and its reconstruction from the extracted projection vector:

Average residual error: PCA-L1 1.200, L2-PCA 1.401, R1-PCA 1.206.

L2-PCA, which has the largest error, is the method most influenced by the outlier.

UCI Data Sets. Data sets from the UCI machine learning repository are used to compare classification performance. A 1-NN classifier was used, with 10-fold cross validation to obtain the average classification rate. For PCA-L1, the initial projection vector was set as …
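To make the greedy search concrete, here is a minimal NumPy sketch of PCA-L1: a sign fixed-point iteration maximizes the L1 dispersion for one projection vector, and the data are deflated before the next vector is extracted. The initialization choice, iteration cap, and convergence test are assumptions rather than details taken from the slides.

```python
import numpy as np

def pca_l1(X, n_components, n_iter=200):
    """Greedy PCA-L1 sketch.

    X : (d, n) array of mean-centred samples stored as columns.
    Returns W : (d, n_components) with orthonormal columns, each obtained
    by maximizing the L1 dispersion sum_i |w^T x_i| on the deflated data.
    """
    d, n = X.shape
    Xr = X.astype(float).copy()          # residual (deflated) data
    W = np.zeros((d, n_components))

    for j in range(n_components):
        # Initialization: the sample with the largest norm (an assumption;
        # any non-zero starting vector could be used instead).
        w = Xr[:, np.argmax(np.linalg.norm(Xr, axis=0))].copy()
        w /= np.linalg.norm(w) + 1e-12

        for _ in range(n_iter):
            # Flip sample signs so every projected sample is non-negative,
            # then point w towards the sum of the sign-flipped samples.
            p = np.sign(Xr.T @ w)
            p[p == 0] = 1.0              # samples lying exactly on w^T x = 0
            w_new = Xr @ p
            w_new /= np.linalg.norm(w_new) + 1e-12
            if np.allclose(w_new, w):    # converged to a local maximum
                break
            w = w_new

        W[:, j] = w
        # Greedy deflation: remove the component along w so that the next
        # projection vector is orthogonal to the ones already found.
        Xr -= np.outer(w, w @ Xr)

    return W
```

For example, W = pca_l1(X, 2) on a mean-centred (d, n) matrix returns two orthonormal directions, and W.T @ X gives the extracted features.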
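The rule for choosing the number of extracted features can be written down directly. The sketch below computes the variance of each PCA-L1 feature and keeps the smallest m whose cumulative share exceeds 95% of the total variance; using the trace of the sample covariance as the total variance is an assumption, since the slides only state that a total variance is used.

```python
import numpy as np

def choose_n_components(X, W, ratio=0.95):
    """Return the smallest m such that the first m PCA-L1 features
    (sorted by variance) capture at least `ratio` of the total variance.

    X : (d, n) mean-centred data;  W : (d, k) PCA-L1 projection vectors.
    """
    features = W.T @ X                      # (k, n) extracted features
    feature_var = features.var(axis=1)      # variance captured by each w_j
    total_var = X.var(axis=1).sum()         # trace of the sample covariance (assumed total variance)
    cumulative = np.cumsum(np.sort(feature_var)[::-1]) / total_var
    m = int(np.searchsorted(cumulative, ratio)) + 1
    return min(m, len(feature_var)), cumulative
```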
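The average residual error used to compare the methods in the toy problem can be computed as follows. Reading it as the mean distance between each sample and its reconstruction from the extracted projection vectors is an assumption; the slides do not give the exact formula.

```python
import numpy as np

def average_residual_error(X, W):
    """Mean reconstruction distance of the samples.

    X : (d, n) mean-centred data;  W : (d, m) orthonormal projection vectors.
    """
    reconstruction = W @ (W.T @ X)                     # project onto span(W) and map back
    return np.linalg.norm(X - reconstruction, axis=0).mean()
```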
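The UCI evaluation protocol (1-NN classifier, 10-fold cross validation, average classification rate) can be sketched with scikit-learn, using the pca_l1 function from the sketch above. The wine data set, the standardisation step, the choice of 5 components, and fitting PCA-L1 on all samples before cross validation are placeholders and simplifications; the slides do not specify these details.

```python
from sklearn.datasets import load_wine
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

# Placeholder data set; the slides do not say which UCI sets were used.
X, y = load_wine(return_X_y=True)

Xs = StandardScaler().fit_transform(X)        # (n, d), standardised samples as rows
W = pca_l1(Xs.T, n_components=5)              # PCA-L1 fitted on all data (a simplification)
features = Xs @ W                             # (n, 5) extracted features

# 1-NN classifier, 10-fold cross validation, average classification rate.
scores = cross_val_score(KNeighborsClassifier(n_neighbors=1), features, y, cv=10)
print("average classification rate:", scores.mean())
```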
