哈工大深圳机器学习复习4_丁宇新.doc
Machine Learning
Question one:
(Give an example, e.g., a car navigation system or a checkers-playing program.)
Question two:
Initialize: G0 = {⟨?, ?, ?, ?, ?, ?⟩}, S0 = {⟨∅, ∅, ∅, ∅, ∅, ∅⟩}
Step 1: positive instance 1 arrives
G = {⟨?, ?, ?, ?, ?, ?⟩}, S = {⟨Sunny, Warm, Normal, Strong, Warm, Same⟩}
Step 2: positive instance 2 arrives
G = {⟨?, ?, ?, ?, ?, ?⟩}, S = {⟨Sunny, Warm, ?, Strong, Warm, Same⟩}
Step 3: negative instance 3 arrives
G = {⟨Sunny, ?, ?, ?, ?, ?⟩, ⟨?, Warm, ?, ?, ?, ?⟩, ⟨?, ?, ?, ?, ?, Same⟩}
S = {⟨Sunny, Warm, ?, Strong, Warm, Same⟩}
Step 4: positive instance 4 arrives
S = {⟨Sunny, Warm, ?, Strong, ?, ?⟩}
G = {⟨Sunny, ?, ?, ?, ?, ?⟩, ⟨?, Warm, ?, ?, ?, ?⟩}
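The trace above can be checked mechanically. Below is a minimal sketch of the Candidate-Elimination algorithm; the review does not reproduce the training table, so the four EnjoySport examples from Mitchell's textbook are assumed here.

```python
# Hypotheses are 6-tuples of attribute values; '?' matches anything and
# None marks an empty slot in the initial specific boundary S0.

def consistent(h, x):
    """h matches x if every slot is '?' or equals the instance value."""
    return all(hv == '?' or hv == xv for hv, xv in zip(h, x))

def generalize(s, x):
    """Minimally generalize the specific boundary s to cover positive x."""
    return tuple(xv if sv is None else (sv if sv == xv else '?')
                 for sv, xv in zip(s, x))

# (Sky, AirTemp, Humidity, Wind, Water, Forecast), label -- assumed data
examples = [
    (('Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'), True),
    (('Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same'), True),
    (('Rainy', 'Cold', 'High', 'Strong', 'Warm', 'Change'), False),
    (('Sunny', 'Warm', 'High', 'Strong', 'Cool', 'Change'), True),
]

S = (None,) * 6        # most specific hypothesis (all slots empty)
G = [('?',) * 6]       # most general boundary

for step, (x, positive) in enumerate(examples, 1):
    if positive:
        S = generalize(S, x)
        G = [g for g in G if consistent(g, x)]
    else:
        # Minimally specialize each g in G so that it excludes x,
        # using only attribute values that appear in S.
        G = [g[:i] + (sv,) + g[i + 1:]
             for g in G
             for i, (gv, sv) in enumerate(zip(g, S))
             if gv == '?' and sv not in (None, '?') and sv != x[i]]
    print(f'Step {step}: S = {S}')
    print(f'        G = {G}')
```

Running this reproduces the four steps listed above, ending with S = ⟨Sunny, Warm, ?, Strong, ?, ?⟩ and a two-member G boundary.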
Question three:
Entropy(S) = -(3/5)log2(3/5) - (2/5)log2(2/5) = 0.971
Gain(S, Sky) = Entropy(S) - [(4/5)Entropy(S_Sunny) + (1/5)Entropy(S_Rainy)] = 0.322
Gain(S, AirTemp) = Gain(S, Wind) = Gain(S, Sky) = 0.322
Gain(S, Humidity) = Gain(S, Forecast) = 0.02
Gain(S, Water) = 0.171
Any of AirTemp, Wind, or Sky can be chosen as the root node, since they tie for the highest information gain.
The decision tree is as follows (taking Sky as the root node): [tree diagram not included in the extracted text]
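The numbers above can be reproduced with a short script. This is a minimal sketch assuming only the class counts the answer implies (the full training table is not reproduced in this review): S contains 3 positive / 2 negative examples, and Sky splits S into Sunny (3+, 1-) and Rainy (0+, 1-).

```python
from math import log2

def entropy(pos, neg):
    """Entropy of a Boolean-labelled sample given its class counts."""
    total = pos + neg
    return -sum(c / total * log2(c / total) for c in (pos, neg) if c)

e_s = entropy(3, 2)
gain_sky = e_s - (4 / 5) * entropy(3, 1) - (1 / 5) * entropy(0, 1)
print(f'Entropy(S)   = {e_s:.3f}')       # 0.971
print(f'Gain(S, Sky) = {gain_sky:.3f}')  # 0.322
```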
Question Four:
Answer:
Inductive bias: a set of prior assumptions made by the learner about the target concept, which gives it a basis for classifying unseen instances.
Formally, suppose L is a learning algorithm, Dc is a set of training examples of a target concept c, and L(xi, Dc) denotes the classification assigned to instance xi by L after training on Dc. The inductive bias of L is a minimal set of assertions B such that, for any target concept c and corresponding training examples Dc: (∀xi ∈ X) [(B ∧ Dc ∧ xi) ⊢ L(xi, Dc)].
Candidate-Elimination (CE): the target concept is contained in the given hypothesis space H.
ID3: (a) shorter trees are preferred over larger trees;
(b) trees that place attributes with high information gain close to the root are preferred over those that do not.
BP: smooth interpolation between data points.
Question Five:
Answer: In naive Bayes classification, we assume that all attributes are conditionally independent given the target value, whereas a Bayesian belief network specifies a set of conditional independence assumptions together with a set of local conditional probability distributions.
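As a concrete illustration of the naive Bayes side of this answer, here is a minimal sketch of the decision rule v_NB = argmax_v P(v) Π_i P(a_i | v), estimated by frequency counts. The toy attributes and data are hypothetical, not from the course material.

```python
from collections import Counter, defaultdict

def train_nb(examples):
    """Estimate P(v) and P(a_i | v) by counting; no smoothing, for brevity.
    examples: iterable of (attribute_tuple, label) pairs."""
    labels = Counter(label for _, label in examples)
    cond = defaultdict(Counter)                 # counts of (slot, label) -> value
    for attrs, label in examples:
        for i, a in enumerate(attrs):
            cond[(i, label)][a] += 1

    def classify(attrs):
        def score(v):
            p = labels[v] / sum(labels.values())        # prior P(v)
            for i, a in enumerate(attrs):
                p *= cond[(i, v)][a] / labels[v]        # likelihood P(a_i | v)
            return p
        return max(labels, key=score)

    return classify

# Hypothetical toy data: (Outlook, Wind) -> PlayTennis
data = [(('Sunny', 'Weak'), 'Yes'), (('Sunny', 'Strong'), 'No'),
        (('Rain', 'Weak'), 'Yes'), (('Rain', 'Strong'), 'No')]
classify = train_nb(data)
print(classify(('Sunny', 'Weak')))              # -> 'Yes'
```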
Question Six: the stochastic gradient descent algorithm
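The worked answer for this question is not reproduced in the extracted text. As a minimal sketch only, here is the standard stochastic (incremental) gradient descent rule for a linear unit, w_i ← w_i + η(t − o)x_i, applied after each training example; the learning rate, epoch count, and toy data below are all assumptions.

```python
import random

def sgd_linear_unit(examples, lr=0.05, epochs=100, seed=0):
    """Train a linear unit with stochastic gradient descent:
    after every single example, w_i += lr * (t - o) * x_i."""
    rng = random.Random(seed)
    data = list(examples)
    w = [0.0] * (len(data[0][0]) + 1)      # w[0] is the bias weight
    for _ in range(epochs):
        rng.shuffle(data)                  # visit examples in random order
        for x, t in data:
            xs = [1.0] + list(x)
            o = sum(wi * xi for wi, xi in zip(w, xs))   # current output
            w = [wi + lr * (t - o) * xi for wi, xi in zip(w, xs)]
    return w

# Hypothetical data sampled from t = 1 + 2x; SGD should recover ~[1.0, 2.0]
data = [([x], 1 + 2 * x) for x in (0.0, 0.5, 1.0, 1.5, 2.0)]
print(sgd_linear_unit(data))
```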
Question Seven: a worked naive Bayes example
Question Eight: