Ensemble Methods Bagging and Boosting.ppt

Ensemble Methods: Bagging and Boosting
Lucila Ohno-Machado
HST951

Topics
- Combining classifiers: Ensembles (vs. mixture of experts)
- Bagging
- Boosting: Ada-Boosting, Arcing
- Stacked Generalization

Combining classifiers
- Examples: classification trees and neural networks, several neural networks, several classification trees, etc.
- Average the results from the different models
- Why? Better classification performance than individual classifiers; more resilience to noise
- Why not? Time consuming; risk of overfitting

Bagging
- Breiman, 1996
- Derived from the bootstrap (Efron, 1993)
- Create classifiers from training sets that are bootstrapped (drawn with replacement)
- Average the results for each case (see the first code sketch below)

Bagging Example (Opitz, 1999)

Boosting
- A family of methods
- Sequential production of classifiers
- Each classifier depends on the previous one and focuses on the previous one's errors
- Examples that were incorrectly predicted by previous classifiers are chosen more often or weighted more heavily

Ada-Boosting
- Freund and Schapire, 1996
- Two approaches:
  - Select examples according to the error of the previous classifier (more representatives of the misclassified cases are selected) - more common
  - Weigh the errors of the misclassified cases more heavily (all cases are incorporated, but with different weights) - does not work for some algorithms

Boosting Example (Opitz, 1999)

Ada-Boosting
- Define e_k as the sum of the probabilities of the instances misclassified by the current classifier C_k
- Multiply the probability of selecting each misclassified case by b_k = (1 - e_k) / e_k
- "Renormalize" the probabilities (i.e., rescale so that they sum to 1)
- Combine classifiers C_1 ... C_k by weighted voting, where C_k has weight log(b_k) (see the second sketch below)

Arcing
- Arc-x4 (Breiman, 1996)
- For the i-th example in the training set, m_i is the number of times it was misclassified by the previous K classifiers
- The probability p_i of selecting example i for the next classifier is
  p_i = (1 + m_i^4) / sum_j (1 + m_j^4)
- Empirically determined (see the third sketch below)

Bias plus variance decomposition
- Geman, 1992
- Bias: how close the average classifier is to the target
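
The bagging recipe above is short enough to sketch directly. Below is a minimal sketch, assuming numpy arrays, non-negative integer class labels, and scikit-learn decision trees as the base classifiers; the function name bagging_predict and the default of 25 classifiers are illustrative choices, not from the slides.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_predict(X_train, y_train, X_test, n_classifiers=25, seed=0):
    """Fit n_classifiers trees on bootstrap samples; majority-vote the result."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    votes = np.empty((n_classifiers, len(X_test)), dtype=int)
    for k in range(n_classifiers):
        idx = rng.integers(0, n, size=n)   # bootstrap: draw n cases WITH replacement
        tree = DecisionTreeClassifier(random_state=k)
        tree.fit(X_train[idx], y_train[idx])
        votes[k] = tree.predict(X_test)
    # "Average the results for each case": majority vote across the classifiers
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

Drawing n cases with replacement leaves roughly 37% of the original cases out of each bootstrap sample, which is what makes the resulting classifiers differ from one another.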
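The four Ada-Boosting steps above translate almost line for line into code. Here is a minimal sketch of the resampling-variant bookkeeping, assuming 0 < e_k < 1 so that b_k is defined; update_probabilities and weighted_vote are hypothetical helper names, not from the slides.

```python
import numpy as np

def update_probabilities(p, misclassified):
    """One boosting round: p is the current selection distribution over the
    training cases; misclassified is a boolean mask of C_k's errors."""
    e_k = p[misclassified].sum()      # e_k: summed probability of the errors
    b_k = (1.0 - e_k) / e_k           # b_k = (1 - e_k) / e_k
    p = p.copy()
    p[misclassified] *= b_k           # up-weight the misclassified cases
    p /= p.sum()                      # "renormalize": rescale to sum to 1
    return p, np.log(b_k)             # log(b_k) becomes C_k's voting weight

def weighted_vote(all_preds, weights):
    """all_preds: (K, n) array of predicted labels; weights: the K log(b_k) values."""
    weights = np.asarray(weights, dtype=float)
    classes = np.unique(all_preds)
    scores = np.array([(weights[:, None] * (all_preds == c)).sum(axis=0)
                       for c in classes])
    return classes[scores.argmax(axis=0)]
```

Note that b_k > 1 only when e_k < 1/2; that is, the update up-weights the mistakes only while the current classifier is doing better than chance.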
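For arcing, the Arc-x4 selection rule above reduces to a single function. The formula is a reconstruction from the slide's definitions rather than a quotation of the deck, and the name arc_x4_probs is illustrative:

```python
import numpy as np

def arc_x4_probs(m):
    """m[i] = number of times example i was misclassified by the K classifiers so far."""
    w = 1.0 + np.asarray(m, dtype=float) ** 4   # 1 + m_i^4
    return w / w.sum()                          # p_i = (1 + m_i^4) / sum_j (1 + m_j^4)
```

For example, arc_x4_probs([0, 0, 3]) gives roughly [0.012, 0.012, 0.976], so the repeatedly misclassified case dominates the next training sample.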
