Pattern Classification, Chapter 2 (Part 2): Bayesian Decision Theory (Sections 2.3-2.5)

Topics: Minimum-Error-Rate Classification; Classifiers, Discriminant Functions and Decision Surfaces; The Normal Density

2.3 Minimum-Error-Rate Classification

- Actions are decisions on classes: if action αi is taken and the true state of nature is ωj, the decision is correct if i = j and in error if i ≠ j.
- Seek a decision rule that minimizes the probability of error, i.e., the error rate.
- Introduce the zero-one loss function:
  λ(αi | ωj) = 0 if i = j, 1 if i ≠ j,   for i, j = 1, …, c
- Therefore, the conditional risk is:
  R(αi | x) = Σj λ(αi | ωj) P(ωj | x) = Σj≠i P(ωj | x) = 1 − P(ωi | x)
- "The risk corresponding to this loss function is the average probability of error."
- Minimizing the risk requires maximizing P(ωi | x), since R(αi | x) = 1 − P(ωi | x).
- For minimum error rate: decide ωi if P(ωi | x) > P(ωj | x) for all j ≠ i. (A code sketch of this rule follows at the end of these notes.)
- Regions of decision and the zero-one loss function: the likelihood-ratio rule "decide ω1 if p(x | ω1)/p(x | ω2) > θλ" specializes, if λ is the zero-one loss function, to the threshold θλ = P(ω2)/P(ω1).

2.4 Classifiers, Discriminant Functions and Decision Surfaces

The multi-category case
- Use a set of discriminant functions gi(x), i = 1, …, c.
- The classifier assigns a feature vector x to class ωi if gi(x) > gj(x) for all j ≠ i.
- For the minimum-risk case, let gi(x) = −R(αi | x) (max. discriminant corresponds to min. risk!).
- For the minimum-error-rate case, take gi(x) = P(ωi | x) (max. discriminant corresponds to max. posterior!).
- Equivalent choices: gi(x) = p(x | ωi) P(ωi), or gi(x) = ln p(x | ωi) + ln P(ωi) (ln: natural logarithm).
- The feature space is divided into c decision regions: if gi(x) > gj(x) for all j ≠ i, then x is in Ri (Ri means assign x to ωi).

The two-category case
- A two-category classifier is a "dichotomizer" with two discriminant functions g1 and g2.
- Let g(x) ≡ g1(x) − g2(x); decide ω1 if g(x) > 0, otherwise decide ω2.
- The computation of g(x):
  g(x) = P(ω1 | x) − P(ω2 | x) = ln [p(x | ω1)/p(x | ω2)] + ln [P(ω1)/P(ω2)]
  (A dichotomizer sketch also follows at the end of these notes.)

2.5 The Normal Density

Univariate density
- A continuous density; many processes are asymptotically Gaussian.
- Handwritten characters and speech sounds can be modeled as an ideal or prototype corrupted by a random process (central limit theorem).
- p(x) = [1/(√(2π) σ)] exp[−½ ((x − μ)/σ)²]
  where μ is the mean (expected value) of x and σ² is the variance (expected squared deviation). (A numerical check of this density closes these notes.)
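To make the minimum-error-rate rule of Section 2.3 concrete, here is a minimal Python sketch (not from the textbook): the posterior values are invented for illustration, and the function names are hypothetical helpers.

```python
import numpy as np

# Conditional risk under the zero-one loss: R(a_i | x) = 1 - P(w_i | x).
def conditional_risk_zero_one(posteriors):
    return 1.0 - posteriors

# Minimum-error-rate rule: pick the class with the largest posterior,
# equivalently the action with the smallest conditional risk.
def decide_min_error_rate(posteriors):
    return int(np.argmax(posteriors))

# Hypothetical posteriors P(w_i | x) for one observation x.
posteriors = np.array([0.2, 0.5, 0.3])
print(conditional_risk_zero_one(posteriors))   # [0.8 0.5 0.7]
print(decide_min_error_rate(posteriors))       # 1, i.e. decide w_2 (0-indexed)
```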
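The log-discriminant form gi(x) = ln p(x | ωi) + ln P(ωi) and the two-category dichotomizer of Section 2.4 can be sketched as follows, assuming univariate Gaussian class-conditional densities; the class parameters (means, variances, priors) are made up for illustration.

```python
import numpy as np

def log_discriminant(x, mean, var, prior):
    """g_i(x) = ln p(x | w_i) + ln P(w_i), with p(x | w_i) = N(mean, var)."""
    log_likelihood = -0.5 * np.log(2.0 * np.pi * var) - (x - mean) ** 2 / (2.0 * var)
    return log_likelihood + np.log(prior)

def dichotomize(x, params1, params2):
    """Two-category case: g(x) = g_1(x) - g_2(x); decide w_1 if g(x) > 0."""
    g = log_discriminant(x, *params1) - log_discriminant(x, *params2)
    return ("w1" if g > 0 else "w2"), g

# Hypothetical class parameters: (mean, variance, prior).
w1 = (0.0, 1.0, 0.6)
w2 = (2.0, 1.0, 0.4)
print(dichotomize(0.5, w1, w2))  # ('w1', ...): x = 0.5 lies on w1's side of the boundary
```

Because ln is monotonic, taking logs leaves the decision regions unchanged while turning the product p(x | ωi) P(ωi) into a numerically friendlier sum.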
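As a numerical sanity check on the univariate normal density of Section 2.5 (parameters chosen arbitrarily, a standard normal with μ = 0 and σ = 1): the density integrates to 1, and about 95% of its mass lies within two standard deviations of the mean.

```python
import numpy as np

# Univariate normal density:
# p(x) = 1 / (sqrt(2*pi) * sigma) * exp(-0.5 * ((x - mu) / sigma)**2)
def univariate_normal(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

xs = np.linspace(-4.0, 4.0, 10001)
p = univariate_normal(xs, mu=0.0, sigma=1.0)
dx = xs[1] - xs[0]

print(p.sum() * dx)                      # ~1.0: the density integrates to one
print(p[np.abs(xs) <= 2.0].sum() * dx)   # ~0.95: mass within two std. devs.
```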