Classification: Ensemble Classifiers (Qiang Yang)
Ensemble Learning: An Introduction
Adapted from slides by Tan, Steinbach, Kumar

General Idea
[Figure: an ensemble combines the predictions of several base classifiers, each trained on a variation of the training data, by voting]

Why does it work?
- Suppose there are 25 base classifiers
- Each classifier has error rate ε = 0.35
- Assume the classifiers are independent
- The majority vote is wrong only when 13 or more of the 25 base classifiers are wrong, so the probability that the ensemble makes a wrong prediction is:
  P(wrong) = Σ_{i=13}^{25} C(25, i) ε^i (1 − ε)^(25−i) ≈ 0.06
  (a numeric check appears in the first sketch at the end of this section)

Examples of Ensemble Methods
How to generate an ensemble of classifiers?
- Bagging
- Boosting

Bagging
- Sampling with replacement
- Build a classifier on each bootstrap sample
- Each instance has probability (1 − 1/n)^n of never being selected, i.e., of ending up as test data
- The training data therefore covers 1 − (1 − 1/n)^n of the original instances
[Table: original Data IDs vs. the IDs drawn into each bootstrap training round]

The 0.632 bootstrap
- This method is also called the 0.632 bootstrap
- On any single draw, a particular training instance has probability 1 − 1/n of not being picked
- Thus its probability of ending up in the test data (never selected in any of the n draws) is:
  (1 − 1/n)^n ≈ e^(−1) ≈ 0.368
- This means the training data will contain approximately 63.2% of the distinct instances (see the second sketch at the end of this section)

Example of Bagging
- Assume the training data are one-dimensional points on the x-axis, labelled −1 for 0.4 ≤ x ≤ 0.7 and +1 elsewhere (below 0.3 and above 0.8)
- Goal: find a collection of 10 simple thresholding classifiers that collectively classify the data correctly (see the bagging sketch at the end of this section)
- Each simple (or weak) classifier is: x ≤ K ⇒ class = +1 or −1, depending on which assignment yields the lowest error, where K is determined by entropy minimization

Bagging (applied to training data)
[Table: the bootstrap sample and learned threshold for each of the 10 bagging rounds]
- Accuracy of the ensemble classifier: 100%

Bagging: Summary
- Works well if the base classifiers are unstable (they complement each other)
- Increases accuracy because it reduces the variance of the individual classifiers
- Does not focus on any particular instance of the training data
- Therefore, it is less susceptible to model over-fitting when applied to noisy data
- What if we want to focus on particular instances of the training data?

In general:
- Bias corresponds to training error; a complex model has low bias
- Variance corresponds to future (generalization) error; a complex model has high variance
- Bagging reduces the variance of the base classifiers

Boosting
- An iterative procedure to adaptively change the distribution of the training data by focusing more on previously misclassified records
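The "Why does it work?" slide quotes the value 0.06 without showing the arithmetic. A minimal Python sketch of the binomial tail sum, assuming the stated setup (25 independent classifiers, ε = 0.35, majority vote):

```python
from math import comb

# Error rate of each of the 25 independent base classifiers.
eps, n = 0.35, 25

# The majority-vote ensemble errs only when 13 or more of the
# 25 base classifiers err simultaneously.
p_wrong = sum(comb(n, i) * eps**i * (1 - eps)**(n - i)
              for i in range(13, n + 1))

print(f"P(ensemble wrong) = {p_wrong:.3f}")  # ~0.06, vs. 0.35 for a single classifier
```

The sum confirms the slide's point: independence drives the majority-vote error far below the individual error rate, which is why correlated base classifiers weaken an ensemble.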
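The 0.632 bootstrap figures can also be checked numerically. A small sketch, using an assumed sample size of n = 1000, that compares the analytic in-sample fraction 1 − (1 − 1/n)^n with one simulated bootstrap draw:

```python
import random

n = 1000  # assumed number of training instances

# Probability that a given instance never appears in a bootstrap
# sample of size n: (1 - 1/n)^n, which tends to e^-1 ~ 0.368.
p_out = (1 - 1 / n) ** n
print(f"analytic in-sample fraction:  {1 - p_out:.3f}")  # ~0.632

# Empirical check: draw one bootstrap sample (with replacement)
# and count how many distinct instance IDs it contains.
random.seed(0)
sample = [random.randrange(n) for _ in range(n)]
print(f"empirical in-sample fraction: {len(set(sample)) / n:.3f}")
```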
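The bagging example can be reproduced end to end. The sketch below reconstructs the training set from the slide's figure (an assumption: x = 0.1, ..., 1.0 with class −1 on [0.4, 0.7] and +1 elsewhere) and bags 10 decision stumps. For simplicity it picks each threshold K by minimizing the error count rather than the slide's entropy criterion, and the final accuracy depends on the random bootstrap draws.

```python
import random

# Reconstructed 1-D training set (assumption based on the slide's figure):
# class -1 for 0.4 <= x <= 0.7, class +1 elsewhere.
X = [round(0.1 * i, 1) for i in range(1, 11)]
y = [-1 if 0.4 <= x <= 0.7 else +1 for x in X]

def fit_stump(xs, ys):
    """Pick threshold K and orientation minimizing training error
    (an error-count stand-in for the slide's entropy criterion)."""
    best = None
    for K in xs:
        for sign in (+1, -1):  # sign = class predicted when x <= K
            preds = [sign if x <= K else -sign for x in xs]
            err = sum(p != t for p, t in zip(preds, ys))
            if best is None or err < best[0]:
                best = (err, K, sign)
    _, K, sign = best
    return lambda x: sign if x <= K else -sign

random.seed(1)
stumps = []
for _ in range(10):  # 10 bagging rounds
    idx = [random.randrange(len(X)) for _ in range(len(X))]  # bootstrap sample
    stumps.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))

# Majority vote of the 10 stumps over the training data (ties default to -1).
votes = [1 if sum(s(x) for s in stumps) > 0 else -1 for x in X]
acc = sum(v == t for v, t in zip(votes, y)) / len(y)
print(f"ensemble training accuracy = {acc:.0%}")
```

Note the design point this illustrates: a single stump can never separate this data, because the positive region needs two thresholds; only the vote over stumps trained on different bootstrap samples (and hence with different thresholds) can recover the full decision boundary, which is exactly the instability that the summary slide says bagging exploits.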