Pattern Recognition Lecture Notes 2010: Lecture 5 (模式识别讲稿2010-lecture5.ppt)

Fisher's Linear Discriminant

The classification problem becomes unmanageable in a feature space of 50 or 100 dimensions. The purpose of Fisher's linear discriminant is to reduce the dimensionality of the problem from $d$ to 1 (the criterion itself is sketched at the end of these notes).

This lecture covers:

• feature extraction methods: the Karhunen-Loève (KL) transformation or expansion;
• feature selection methods;
• basic measures of class separability: error probability over the feature space, and inter-class distance computed from the observed feature vectors;
• search algorithms for feature selection: a branch-and-bound algorithm for optimal search.

Estimating the Covariance Matrix

In the parametric method for supervised classification, the covariance matrix must be estimated from training samples, and the sample estimate will be singular if there are fewer samples than dimensions ($n < d$). In order to obtain a good estimate, $n$ should be several times $d$, at least 5 times (illustrated numerically below). "There is no data like more data."

Two Categories of Feature Space Reduction

All mathematical feature-space reduction techniques can be classified into one of two major categories:

• feature selection, carried out in the original measurement space of features;
• feature "extraction", which maps the patterns into a transformed space.

The Karhunen-Loève (KL) Expansion

Discriminatory information compression can be achieved by approximating the feature vectors, each representing a pattern, by a number of terms of the Karhunen-Loève (KL) expansion. This number is not limited to $c - 1$, where $c$ is the number of pattern classes; it can be chosen to minimize a certain objective function.

Consider a large set of $D$-dimensional pattern vectors $\mathbf{x}$. It is always possible to expand each $\mathbf{x}$ into an arbitrary but complete system of orthonormal deterministic basis vectors $\boldsymbol{\varphi}_j$, $j = 1, \ldots, \infty$, where $\boldsymbol{\varphi}_i^T \boldsymbol{\varphi}_j = \delta_{ij}$, without incurring any information loss:

$$\mathbf{x} = \sum_{j=1}^{\infty} x_j \boldsymbol{\varphi}_j, \qquad x_j = \boldsymbol{\varphi}_j^T \mathbf{x},$$

where $x_j$ is the expansion coefficient associated with the basis vector $\boldsymbol{\varphi}_j$. An approximation $\hat{\mathbf{x}}(d)$ of $\mathbf{x}$ is obtained by taking a finite number, $d$, of terms in the expansion:

$$\hat{\mathbf{x}}(d) = \sum_{j=1}^{d} x_j \boldsymbol{\varphi}_j.$$

We want to choose the basis vectors in such a way that the mean-square error

$$\varepsilon(d) = E\left[ \|\mathbf{x} - \hat{\mathbf{x}}(d)\|^2 \right]$$

is minimized. Since $\|\mathbf{x} - \hat{\mathbf{x}}(d)\|^2$ is just $\sum_{j=d+1}^{\infty} x_j^2$ and $x_j = \boldsymbol{\varphi}_j^T \mathbf{x}$, let us denote the correlation matrix $E[\mathbf{x} \mathbf{x}^T]$ by $R$; then

$$\varepsilon(d) = \sum_{j=d+1}^{\infty} \boldsymbol{\varphi}_j^T R \, \boldsymbol{\varphi}_j.$$

We want to minimize $\varepsilon(d)$, considered as a function of the $\boldsymbol{\varphi}_j$, subject to the orthonormality constraints $\boldsymbol{\varphi}_j^T \boldsymbol{\varphi}_j = 1$.
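The surviving text stops at the constrained minimization; a sketch of the standard completion, introducing Lagrange multipliers $\lambda_j$ (notation supplied here, not taken from the slides):

$$\begin{aligned}
J &= \sum_{j=d+1}^{\infty} \boldsymbol{\varphi}_j^T R\,\boldsymbol{\varphi}_j - \sum_{j=d+1}^{\infty} \lambda_j \left( \boldsymbol{\varphi}_j^T \boldsymbol{\varphi}_j - 1 \right), \\
\frac{\partial J}{\partial \boldsymbol{\varphi}_j} &= 2 R\,\boldsymbol{\varphi}_j - 2 \lambda_j \boldsymbol{\varphi}_j = 0 \quad\Longrightarrow\quad R\,\boldsymbol{\varphi}_j = \lambda_j \boldsymbol{\varphi}_j, \\
\varepsilon(d) &= \sum_{j=d+1}^{\infty} \boldsymbol{\varphi}_j^T R\,\boldsymbol{\varphi}_j = \sum_{j=d+1}^{\infty} \lambda_j.
\end{aligned}$$

The optimal basis vectors are therefore eigenvectors of the correlation matrix $R$, and $\varepsilon(d)$ is minimized by retaining the $d$ eigenvectors with the largest eigenvalues and discarding the rest; this choice of basis is the KL transformation.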

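As a concrete illustration of the expansion just derived, here is a minimal numpy sketch (the toy data, variable names, and the final check are all supplied here, not taken from the slides): it estimates $R$ from a sample, uses its leading eigenvectors as the basis vectors $\boldsymbol{\varphi}_j$, keeps $d$ terms, and verifies that the mean-square error equals the sum of the discarded eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n zero-mean patterns in D dimensions, correlated so that
# a few basis vectors capture most of the energy.
n, D, d = 500, 10, 3
A = rng.normal(size=(D, D))
X = rng.normal(size=(n, D)) @ A.T          # rows are pattern vectors x

# Correlation matrix R = E[x x^T], estimated from the sample.
R = X.T @ X / n

# Orthonormal basis: eigenvectors of R, sorted by decreasing eigenvalue.
eigvals, eigvecs = np.linalg.eigh(R)       # eigh, since R is symmetric
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep d terms of the expansion: coefficients x_j = phi_j^T x,
# reconstruction x_hat(d) = sum_{j<=d} x_j phi_j.
Phi_d = eigvecs[:, :d]                     # (D, d)
coeffs = X @ Phi_d                         # (n, d)
X_hat = coeffs @ Phi_d.T                   # (n, D)

# Empirical mean-square error vs. the theoretical value
# eps(d) = sum of the discarded eigenvalues.
mse_empirical = np.mean(np.sum((X - X_hat) ** 2, axis=1))
mse_theory = eigvals[d:].sum()
print(f"empirical MSE: {mse_empirical:.4f}, "
      f"sum of discarded eigenvalues: {mse_theory:.4f}")
```

The two printed numbers agree (up to floating-point error), since for the sample correlation matrix the average reconstruction error is exactly the sum of the eigenvalues left out of the expansion.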

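A quick numerical check of the sample-size rule of thumb stated earlier (a sketch with synthetic Gaussian data; the dimension, sample sizes, and seed are arbitrary choices): the sample covariance of $n$ mean-centred observations has rank at most $n - 1$, so it is singular whenever $n \le d$ and remains poorly conditioned until $n$ is several times $d$.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 20  # feature dimension

# Sample covariance from n observations: after mean subtraction its
# rank is at most n - 1, so with n <= d the estimate is singular.
for n in (10, 21, 100):  # n < d, n barely > d, n = 5 d
    X = rng.normal(size=(n, d))
    S = np.cov(X, rowvar=False)            # sample covariance, (d, d)
    rank = np.linalg.matrix_rank(S)
    cond = np.linalg.cond(S)
    print(f"n = {n:3d}: rank(S) = {rank:2d} of {d}, "
          f"condition number = {cond:.1e}")
```

With $n = 10$ the rank is 9 and the matrix cannot be inverted; with $n = 21$ it is technically invertible but badly conditioned; only around $n = 5d$ does the estimate become usable, which is the "at least 5 times" guideline.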

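Finally, the criterion behind the Fisher discriminant mentioned at the top of these notes is not spelled out in the surviving text; for the two-class case it is standardly written as follows ($\mathbf{m}_1, \mathbf{m}_2$ are the class means, $S_W$ the within-class and $S_B$ the between-class scatter matrices; all of this notation is supplied here, not taken from the slides):

$$\begin{aligned}
y &= \mathbf{w}^T \mathbf{x} \qquad \text{(projection from $d$ dimensions to 1)}, \\
J(\mathbf{w}) &= \frac{\mathbf{w}^T S_B\,\mathbf{w}}{\mathbf{w}^T S_W\,\mathbf{w}}, \qquad S_B = (\mathbf{m}_1 - \mathbf{m}_2)(\mathbf{m}_1 - \mathbf{m}_2)^T, \\
\mathbf{w}^\ast &\propto S_W^{-1} (\mathbf{m}_1 - \mathbf{m}_2).
\end{aligned}$$

Maximizing $J(\mathbf{w})$ picks the one-dimensional projection that best separates the class means relative to the within-class scatter, which is exactly the $d \to 1$ reduction described above.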