Learning Processes
CS679 Lecture Note by Jin Hyung Kim, Computer Science Department, KAIST (jkim@cs.kaist.ac.kr)

Learning
- Learning is a process by which the free parameters of a neural network are adapted through stimulation from the environment.
- Sequence of events: the network is stimulated by an environment, undergoes changes in its free parameters, and responds in a new way to the environment.
- A learning algorithm is a prescribed set of steps that makes a system learn, i.e., a way to adjust the synaptic weights of a neuron.
- There is no unique learning algorithm; we have a kit of tools.
- The chapter covers five learning rules, learning paradigms, issues of learning tasks, and probabilistic and statistical aspects of learning.

Error-Correction Learning (I)
- Error signal: e_k(n) = d_k(n) - y_k(n), where n denotes the time step.
- The error signal activates a control mechanism for corrective adjustment of the synaptic weights.
- The adjustment minimizes a cost function E(n), or index of performance, also called the instantaneous value of the error energy.
- Step-by-step adjustment continues until the system reaches a steady state, i.e., the synaptic weights are stabilized.
- This scheme is also called the delta rule or Widrow-Hoff rule.

Error-Correction Learning (II)
- Weight adjustment: Δw_kj(n) = η e_k(n) x_j(n), where η is the rate of learning (learning-rate parameter).
- Weight update: w_kj(n+1) = w_kj(n) + Δw_kj(n), equivalently w_kj(n) = z^{-1}[w_kj(n+1)], where z^{-1} is the unit-delay operator.
- The adjustment is proportional to the product of the error signal and the input signal, so error-correction learning is local.
- The learning rate η determines the stability and convergence of the process. (A code sketch of this update rule follows at the end of these notes.)

Memory-Based Learning
- Past experiences are stored in memory as correctly classified input-output examples; a new input is classified by retrieving and analyzing its "local neighborhood" of stored examples.
- Essential ingredients: the criterion used for defining the local neighborhood, and the learning rule applied to the training examples in that neighborhood.
- Nearest Neighbor Rule (NNR): the vector X'_N ∈ {x_1, x_2, ..., x_N} is the nearest neighbor of x_test if min_i d(x_i, x_test) = d(X'_N, x_test); the class of X'_N is then reported as the class of x_test. (A code sketch follows at the end of these notes.)

Nearest Neighbor Rule (Cover and Hart)
- Assume the examples are independent and identically distributed, and the sample size N is infinitely large.
- Then error(NNR) ≤ 2 · error(Bayesian rule); in other words, half of the classification information in an infinite training set is contained in the nearest neighbor.
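The following is a minimal sketch of the error-correction (delta) rule described above, for a single linear neuron trained by repeated passes over the data. The function name delta_rule_train and the parameters eta and num_epochs are illustrative assumptions, not from the lecture note.

```python
import numpy as np

def delta_rule_train(X, d, eta=0.1, num_epochs=50):
    """Delta-rule sketch: adapt w so that y(n) = w . x(n) tracks d(n).

    X   : (N, m) array of input vectors x_j(n)
    d   : (N,)   array of desired responses d_k(n)
    eta : learning-rate parameter (assumed fixed; it governs
          the stability/convergence of the adjustment)
    """
    w = np.zeros(X.shape[1])
    for _ in range(num_epochs):
        for x_n, d_n in zip(X, d):
            y_n = w @ x_n          # actual response y_k(n)
            e_n = d_n - y_n        # error signal e_k(n) = d_k(n) - y_k(n)
            w += eta * e_n * x_n   # local update: Δw_kj(n) = η e_k(n) x_j(n)
    return w

# Usage example: recover a noisy linear mapping.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
d = X @ true_w + 0.01 * rng.normal(size=100)
print(delta_rule_train(X, d))  # should be close to [1.0, -2.0, 0.5]
```

Note that the update uses only the locally available error and input signals, which is the sense in which error-correction learning is local.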
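Similarly, a minimal sketch of the nearest neighbor rule: the test vector is assigned the class of its closest stored example. The Euclidean metric is an assumption here, since the note does not fix the distance measure d; the name nnr_classify is likewise illustrative.

```python
import numpy as np

def nnr_classify(X_train, labels, x_test):
    """NNR sketch: report the class of the stored example nearest to x_test.

    X_train : (N, m) array of stored examples x_1, ..., x_N
    labels  : (N,)   array of their class labels
    x_test  : (m,)   vector to classify
    """
    dists = np.linalg.norm(X_train - x_test, axis=1)  # d(x_i, x_test), Euclidean assumed
    nearest = np.argmin(dists)    # index of X'_N achieving min_i d(x_i, x_test)
    return labels[nearest]        # class of X'_N reported as the class of x_test

# Usage example: two well-separated clusters.
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
labels = np.array([0, 0, 1, 1])
print(nnr_classify(X_train, labels, np.array([0.2, 0.1])))  # -> 0
print(nnr_classify(X_train, labels, np.array([4.8, 5.1])))  # -> 1
```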