AAI Lecture 7: Machine Learning

Regression: Minimizing Loss
- Model: y = w1·x + w0
- Linear algebra gives an exact solution to the minimization problem.

Linear Algebra Solution: Linear Regression
- Example: X = 3, 6, 4, 5; Y = 0, -3, -1, -2.
- Minimizing quadratic loss for f(x) = w1·x + w0 gives w1 = -1, w0 = 3.
- Quiz: recalculate w0, w1 for X = (2, 4, 6, 8), Y = (2, 5, 5, 8). (Both fits are checked numerically in the first sketch after these notes.)

Don't Always Trust Linear Models

Regression by Gradient Descent (sketch below)
- w = any point
- loop until convergence:
      for each wi in w:
          wi ← wi − α·∂Loss(w)/∂wi

Multivariate Regression
- You learned this in math class too: hw(x) = w·x = wᵀx = Σi wi·xi
- The most probable set of weights w* (minimizing squared error): w* = (XᵀX)⁻¹ Xᵀ y

Overfitting
- To avoid overfitting, don't just minimize loss; maximize probability, including a prior over w.
- This can be stated as a minimization: Cost(h) = EmpiricalLoss(h) + λ·Complexity(h).
- For linear models, consider Complexity(hw) = Lq(w) = Σi |wi|^q:
  - L1 regularization minimizes the sum of absolute values.
  - L2 regularization minimizes the sum of squares (a ridge-regression sketch below illustrates this case).

Regularization and Sparsity
- Under Cost(h) = EmpiricalLoss(h) + λ·Complexity(h), the L1 and L2 penalties behave differently: L1 tends to drive some weights exactly to zero, giving sparse models, while L2 only shrinks them.

Outline
- Machine Learning
  - Classification (Naïve Bayes)
  - Regression (Linear, Smoothing)
  - Linear Separation (Perceptron, SVMs)
  - Non-parametric classification (KNN)

Linear Separator: Perceptron

Perceptron Algorithm (sketch below)
- Start with random w0, w1.
- Pick a training example (x, y).
- Update (α is the learning rate):
      w1 ← w1 + α·(y − f(x))·x
      w0 ← w0 + α·(y − f(x))
- Converges to a linear separator (if one exists).
- Picks "a" linear separator, but is it a good one?

What Linear Separator to Pick?
- The one that maximizes the "margin": Support Vector Machines.

Non-Separable Data?
- Some data are not linearly separable in x1, x2.
- What if we add a feature x3 = x1² + x2²? The data may then become linearly separable in (x1, x2, x3); see the "kernel trick" (and the feature-lifting sketch below).

Nonparametric Models
- If the process of learning good values for parameters is prone to overfitting … (KNN, sketched at the end of these notes, is one such model.)
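The closed-form fits quoted in the notes can be checked numerically. A minimal sketch (assuming NumPy is available) that applies the normal equation w* = (XᵀX)⁻¹Xᵀy to the slide's example data and the quiz data:

```python
import numpy as np

# Example from the notes: fitting f(x) = w1*x + w0 should give w1 = -1, w0 = 3.
x = np.array([3.0, 6.0, 4.0, 5.0])
y = np.array([0.0, -3.0, -1.0, -2.0])

# Design matrix with a leading column of ones for the intercept w0.
X = np.column_stack([np.ones_like(x), x])

# Normal equation: w* = (X^T X)^(-1) X^T y, the minimizer of quadratic loss.
w = np.linalg.solve(X.T @ X, X.T @ y)
print("w0 = %.3f, w1 = %.3f" % (w[0], w[1]))          # w0 = 3.000, w1 = -1.000

# Quiz data from the notes: X = (2, 4, 6, 8), Y = (2, 5, 5, 8).
xq = np.array([2.0, 4.0, 6.0, 8.0])
yq = np.array([2.0, 5.0, 5.0, 8.0])
Xq = np.column_stack([np.ones_like(xq), xq])
wq = np.linalg.solve(Xq.T @ Xq, Xq.T @ yq)
print("quiz: w0 = %.3f, w1 = %.3f" % (wq[0], wq[1]))  # w0 = 0.500, w1 = 0.900
```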
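The gradient-descent loop from the notes, written out for the same one-dimensional model. The learning rate α = 0.01 and the iteration count are illustrative choices, not values from the slides:

```python
import numpy as np

def gradient_descent(x, y, alpha=0.01, iters=5000):
    """Minimize Loss(w) = sum_i (y_i - (w1*x_i + w0))^2 by gradient descent."""
    w0, w1 = 0.0, 0.0                 # "w = any point"; the origin is as good as any
    for _ in range(iters):
        err = y - (w1 * x + w0)       # residuals y - f(x)
        grad_w0 = -2.0 * err.sum()          # dLoss/dw0
        grad_w1 = -2.0 * (err * x).sum()    # dLoss/dw1
        w0 -= alpha * grad_w0         # w_i <- w_i - alpha * dLoss/dw_i
        w1 -= alpha * grad_w1
    return w0, w1

x = np.array([3.0, 6.0, 4.0, 5.0])
y = np.array([0.0, -3.0, -1.0, -2.0])
print(gradient_descent(x, y))        # converges toward (3.0, -1.0)
```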
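One concrete way to realize Cost(h) = EmpiricalLoss(h) + λ·Complexity(h) with the L2 penalty is ridge regression, which keeps a closed form. This is a sketch of that special case (the λ values are arbitrary), not the slides' own derivation:

```python
import numpy as np

def ridge_fit(X, y, lam=0.1):
    """L2-regularized least squares: minimize ||y - Xw||^2 + lam * ||w||^2.
    Closed form: w = (X^T X + lam*I)^(-1) X^T y. For simplicity the intercept
    column is penalized like every other weight here."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Same toy data as before; larger lam shrinks the weights toward zero.
x = np.array([3.0, 6.0, 4.0, 5.0])
y = np.array([0.0, -3.0, -1.0, -2.0])
X = np.column_stack([np.ones_like(x), x])
print(ridge_fit(X, y, lam=0.0))     # ~[ 3.0, -1.0], the unregularized fit
print(ridge_fit(X, y, lam=10.0))    # weights pulled markedly toward zero
```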
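A sketch of the perceptron update rule quoted in the notes, run on a made-up one-dimensional data set with 0/1 labels; the data, seed, learning rate, and epoch count are all illustrative assumptions:

```python
import numpy as np

def perceptron(xs, ys, alpha=0.1, epochs=100, seed=0):
    """Train a 1-D perceptron f(x) = [w1*x + w0 >= 0] with the update rule
       w1 <- w1 + alpha*(y - f(x))*x,   w0 <- w0 + alpha*(y - f(x)).
    Labels y are 0 or 1; nothing changes when the example is classified correctly."""
    rng = np.random.default_rng(seed)
    w0, w1 = rng.standard_normal(2)          # "start with random w0, w1"
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            f = 1.0 if w1 * x + w0 >= 0 else 0.0
            w1 += alpha * (y - f) * x
            w0 += alpha * (y - f)
    return w0, w1

# Made-up separable data: points below 5 are class 0, points above 5 are class 1.
xs = np.array([1.0, 2.0, 3.0, 7.0, 8.0, 9.0])
ys = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
w0, w1 = perceptron(xs, ys)
print([1.0 if w1 * x + w0 >= 0 else 0.0 for x in xs])   # matches ys: a separator was found
```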
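The feature-lifting idea from the non-separable slide can be reproduced with synthetic circular data; the radii and sample sizes below are assumptions made purely for the illustration:

```python
import numpy as np

# Points inside a small circle (class 0) vs. points in an outer ring (class 1)
# are not linearly separable in (x1, x2), but adding x3 = x1^2 + x2^2 lifts them
# into a space where a single plane separates the classes.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200)
r = np.concatenate([rng.uniform(0.0, 0.8, 100),     # inner class, radius < 0.8
                    rng.uniform(1.2, 2.0, 100)])    # outer class, radius > 1.2
x1, x2 = r * np.cos(theta), r * np.sin(theta)
x3 = x1**2 + x2**2                                  # the added feature

# In the lifted space the plane x3 = 1 separates the two classes exactly.
labels = (r > 1.0).astype(int)
print(np.all((x3 > 1.0).astype(int) == labels))     # True
```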
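Finally, the outline's non-parametric classifier (KNN) keeps the training data around and labels a query by majority vote among its nearest neighbours. A minimal sketch on invented toy clusters:

```python
import numpy as np

def knn_predict(X_train, y_train, x_query, k=3):
    """k-nearest-neighbour classification: no parameters are learned; the
    training set itself is the model. Uses Euclidean distance and majority vote."""
    dists = np.linalg.norm(X_train - x_query, axis=1)   # distance to every training point
    nearest = np.argsort(dists)[:k]                      # indices of the k closest points
    votes = y_train[nearest]
    return np.bincount(votes).argmax()                   # most common label among them

# Invented 2-D toy data: two small clusters with labels 0 and 1.
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                    [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
y_train = np.array([0, 0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.95, 1.0])))   # -> 1
```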
