Data Mining: Data Preprocessing (Lecture Slides)
Data Mining: Concepts and Techniques — Chapter 2 —

Slide outline:
- Chapter 2: Data Preprocessing
- Why Data Preprocessing?
- Why Is Data Dirty?
- Why Is Data Preprocessing Important?
- Major Tasks in Data Preprocessing
- Forms of Data Preprocessing
- Mining Data Descriptive Characteristics
- Measuring the Central Tendency
- Symmetric vs. Skewed Data
- Measuring the Dispersion of Data
- Boxplot Analysis
- Measuring the Dispersion of Data (cont.)
- Properties of Normal Distribution Curve
- Visualization of Data Dispersion: Boxplot Analysis
- Histogram Analysis
- Quantile Plot
- Quantile-Quantile (Q-Q) Plot
- Scatter Plot
- Loess Curve
- Positively and Negatively Correlated Data
- Not Correlated Data
- Data Cleaning
- How to Handle Missing Data?
- How to Handle Noisy Data?
- Simple Discretization Methods: Binning
- Binning Methods for Data Smoothing
- Regression
- Cluster Analysis
- Data Integration
- Data Integration (cont.)
- Handling Redundancy in Data Integration
- Correlation Analysis (Numerical Data)
- Correlation Analysis (Categorical Data)
- Chi-Square Calculation: An Example
- Data Transformation
- Data Transformation: Normalization
- Data Reduction Strategies
- Attribute Subset Selection
- Heuristic Feature Selection Methods
- Example of Decision Tree Induction
- Data Compression
- Dimensionality Reduction: Wavelet Transformation
- DWT for Image Compression
- Principal Component Analysis (PCA)

Principal Component Analysis (PCA)
Given N data vectors from n dimensions, find k ≤ n orthogonal vectors (principal components) that can best be used to represent the data.
Steps (illustrated by the code sketch after this list):
- Normalize the input data so that each attribute falls within the same range.
- Compute k orthonormal (unit) vectors, i.e., the principal components.
- Each input data vector is a linear combination of the k principal component vectors.
- The principal components are sorted in order of decreasing "significance" or strength.
- Since the components are sorted, the size of the data can be reduced by eliminating the weak components, i.e., those with low variance; the strongest principal components still reconstruct a good approximation of the original data.
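The steps above map onto a few lines of NumPy. The sketch below is illustrative and not from the original slides: the function name `pca` is hypothetical, and using an SVD of the normalized data matrix (rather than an explicit covariance eigendecomposition) is an assumption of this example.

```python
import numpy as np

def pca(X, k):
    """Project an N x n data matrix X onto its k strongest principal components."""
    # Step 1: normalize input data so each attribute falls within the same
    # range (zero mean, unit standard deviation per attribute).
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    # Step 2: compute orthonormal principal components. The rows of Vt from
    # the SVD of Z are unit eigenvectors of the covariance matrix of Z.
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    # Step 4: np.linalg.svd returns singular values in decreasing order, so
    # the components arrive already sorted by "significance" (variance).
    components = Vt[:k]          # keep only the k strongest components
    # Step 3: each normalized input vector is a linear combination of the
    # principal component vectors; the coefficients are these projections.
    scores = Z @ components.T    # shape (N, k)
    return scores, components

# Example: reduce 5-dimensional points to 2 principal components.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
scores, components = pca(X, k=2)
print(scores.shape)        # (100, 2)
print(components.shape)    # (2, 5)
```

The SVD route avoids forming the covariance matrix explicitly and is numerically stable; eliminating the weak components then amounts to simply truncating `Vt` after its first k rows.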