CS 229, Public Course
Problem Set #4 Solutions: Unsupervised Learning and Reinforcement Learning

1. EM for supervised learning

In class we applied EM to the unsupervised learning setting. In particular, we represented $p(x)$ by marginalizing over a latent random variable

$$p(x) = \sum_z p(x, z) = \sum_z p(x \mid z)\,p(z).$$

However, EM can also be applied to the supervised learning setting, and in this problem we discuss a "mixture of linear regressors" model; this is an instance of what is often called the Hierarchical Mixture of Experts model. We want to represent $p(y \mid x)$, $x \in \mathbb{R}^n$ and $y \in \mathbb{R}$, and we do so by again introducing a discrete latent random variable

$$p(y \mid x) = \sum_z p(y, z \mid x) = \sum_z p(y \mid x, z)\,p(z \mid x).$$

For simplicity we'll assume that $z$ is binary valued, that $p(y \mid x, z)$ is a Gaussian density, and that $p(z \mid x)$ is given by a logistic regression model. More formally,

$$p(z \mid x; \phi) = g(\phi^T x)^z \,\bigl(1 - g(\phi^T x)\bigr)^{1-z}$$

$$p(y \mid x, z = i; \theta_i) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( \frac{-(y - \theta_i^T x)^2}{2\sigma^2} \right), \qquad i = 0, 1$$

where $\sigma$ is a known parameter and $\phi, \theta_0, \theta_1 \in \mathbb{R}^n$ are parameters of the model (here we …
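As a concrete illustration of the model defined above, here is a minimal Python sketch that evaluates the mixture density $p(y \mid x)$ and the posterior responsibility $p(z = 1 \mid x, y)$ used in the E-step of EM. This is not part of the problem set; the function names (`sigmoid`, `gaussian_pdf`, `mixture_density`, `e_step`) are illustrative, and `x`, `phi`, `theta0`, `theta1` are assumed to be NumPy vectors of matching dimension.

```python
import numpy as np

def sigmoid(a):
    # Logistic function g(a) = 1 / (1 + exp(-a))
    return 1.0 / (1.0 + np.exp(-a))

def gaussian_pdf(y, mean, sigma):
    # Density of N(mean, sigma^2) evaluated at y
    return np.exp(-(y - mean) ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)

def mixture_density(x, y, phi, theta0, theta1, sigma):
    # p(y|x) = p(z=1|x) p(y|x,z=1) + p(z=0|x) p(y|x,z=0)
    p_z1 = sigmoid(x @ phi)                      # p(z=1|x; phi) from the logistic model
    p_y_z0 = gaussian_pdf(y, x @ theta0, sigma)  # p(y|x, z=0; theta_0)
    p_y_z1 = gaussian_pdf(y, x @ theta1, sigma)  # p(y|x, z=1; theta_1)
    return p_z1 * p_y_z1 + (1.0 - p_z1) * p_y_z0

def e_step(x, y, phi, theta0, theta1, sigma):
    # Posterior responsibility p(z=1|x, y) via Bayes' rule
    p_z1 = sigmoid(x @ phi)
    w1 = p_z1 * gaussian_pdf(y, x @ theta1, sigma)
    w0 = (1.0 - p_z1) * gaussian_pdf(y, x @ theta0, sigma)
    return w1 / (w0 + w1)
```

In a full EM loop for this model, the responsibilities returned by `e_step` would serve as example weights in the M-step: weighted least squares updates for $\theta_0, \theta_1$ and a weighted logistic regression update for $\phi$.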
