Reinforcement Learning Presentation Slides (强化学习演示课件.ppt)


Monte Carlo Control (蒙特卡罗控制)
How to select policies (similar to policy evaluation):
- MC policy iteration: policy evaluation using MC methods, followed by policy improvement.
- Policy improvement step: greedify with respect to the value (or action-value) function.

Temporal-Difference Learning (时序差分学习)
- MC target: the actual return after time t.
- TD target: an estimate of the return, r_{t+1} + γ V(s_{t+1}).

Temporal-Difference Learning (TD)
- Idea: do ADP backups on a per-move basis, not for the whole state space (a minimal code sketch appears after these slides).
- Theorem: the average value of U(i) converges to the correct value.
- Theorem: if α is decreased appropriately as a function of the number of times a state is visited (α = α[N[i]]), then U(i) itself converges to the correct value.

Temporal-Difference Learning (TD) [backup-diagram figure from the original slide omitted]

TD(λ) – A Forward View
- TD(λ) is a method for averaging all n-step backups, weighted by λ^(n−1) (time since visitation).
- λ-return: G_t^λ = (1 − λ) Σ_{n=1}^∞ λ^(n−1) G_t^(n)
- Backup using the λ-return: ΔV_t(s_t) = α [G_t^λ − V_t(s_t)]

TD(λ) Algorithm (时序差分学习算法)
- Idea: update from the whole epoch, not just on each state transition.
- Special cases: λ = 1 gives least-mean-square (LMS), i.e. Monte Carlo; λ = 0 gives one-step TD.
- An intermediate choice of λ (between 0 and 1) is best.
- Interplay with γ …

Convergence of TD(λ) (时序差分学习算法收敛性)
- Theorem: TD(λ) converges with probability 1 under certain boundary conditions.
- Decrease the step sizes α_i(t) such that Σ_t α_i(t) = ∞ and Σ_t α_i(t)² < ∞.
- In practice, a fixed α is often used for all i and t.

Q-Learning (Watkins, 1989)
- Estimate the Q-function using some approximator (for example, linear regression, neural networks, or decision trees).
- Derive the estimated policy as an argument of the maximum (argmax) of the estimated Q-function.
- Allow different parameter vectors at different time points.
- To illustrate the algorithm, use linear regression as the approximator and, naturally, squared error as the loss function.

Q-Learning Update Q(a, i)
- A direct approach (ADP) would require learning a transition model; Q-learning does not.
- Do this update after each transition from state i to state j with reward R(i):
  Q(a, i) ← Q(a, i) + α (R(i) + γ max_{a'} Q(a', j) − Q(a, i))

Exploration
- Tradeoff between exploitation (control) and exploration (identification).
- Extremes: greedy vs. random acting (n-armed bandit models).
- Q-learning converges to optimal Q-values if every state is visited infinitely often.
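The per-move backup idea can be made concrete with a short sketch. Below is a minimal tabular TD(0) prediction loop in Python; the environment interface (env.reset(), env.step() returning next state, reward, and a done flag) and all parameter values are illustrative assumptions, not part of the original slides.

    # Minimal tabular TD(0) prediction sketch (assumed env interface).
    from collections import defaultdict

    def td0_evaluate(env, policy, gamma=0.9, alpha=0.1, episodes=500):
        """Estimate V under a fixed policy with one-step TD backups."""
        V = defaultdict(float)  # U(i) in the slides' notation
        for _ in range(episodes):
            s = env.reset()
            done = False
            while not done:
                a = policy(s)
                s_next, r, done = env.step(a)
                # TD target: an *estimate* of the return, r + gamma * V(s'),
                # rather than the actual Monte Carlo return after time t.
                target = r if done else r + gamma * V[s_next]
                V[s] += alpha * (target - V[s])
                s = s_next
        return V

Note that the backup happens after every single transition, per move, rather than sweeping the whole state space as ADP would.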
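The forward-view λ-return above is usually implemented via the equivalent backward view with eligibility traces; that substitution is mine, not the slides'. A sketch under the same assumed environment interface:

    # TD(lambda) prediction via accumulating eligibility traces (backward view).
    from collections import defaultdict

    def td_lambda_evaluate(env, policy, gamma=0.9, alpha=0.1, lam=0.8, episodes=500):
        """lam=0 reduces to one-step TD; lam=1 approaches Monte Carlo / LMS."""
        V = defaultdict(float)
        for _ in range(episodes):
            e = defaultdict(float)  # eligibility trace per state
            s = env.reset()
            done = False
            while not done:
                a = policy(s)
                s_next, r, done = env.step(a)
                delta = (r if done else r + gamma * V[s_next]) - V[s]
                e[s] += 1.0  # accumulating trace for the visited state
                for state in list(e):
                    V[state] += alpha * delta * e[state]
                    e[state] *= gamma * lam  # decay realizes the lambda^(n-1) weighting
                s = s_next
        return V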
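The slides' illustration of Q-learning with linear regression and squared-error loss corresponds to a semi-gradient update on a parameter vector. The following single-step sketch assumes a hypothetical feature function phi(s, a) returning a NumPy vector of the same dimension as theta; all names here are illustrative.

    # One semi-gradient Q-learning step with a linear approximator:
    # Q(s, a; theta) = theta . phi(s, a), loss = 0.5 * (TD error)^2.
    import numpy as np

    def linear_q_step(theta, phi, s, a, r, s_next, actions, done, gamma=0.9, alpha=0.01):
        q_sa = theta @ phi(s, a)
        q_next = 0.0 if done else max(theta @ phi(s_next, b) for b in actions)
        td_error = (r + gamma * q_next) - q_sa
        # Gradient of the squared error w.r.t. theta, treating the target as fixed.
        return theta + alpha * td_error * phi(s, a)

Allowing a different theta at each time point, as the slide says, amounts to returning a fresh parameter vector from every step, which is exactly what this function does.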
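Finally, the tabular Q-learning update and the exploration tradeoff combine into the familiar ε-greedy algorithm. ε-greedy is one standard way to interpolate between the greedy and random extremes named on the slide; the environment interface and parameters are again assumptions for illustration.

    # Tabular Q-learning (Watkins, 1989) with epsilon-greedy exploration.
    import random
    from collections import defaultdict

    def q_learning(env, actions, gamma=0.9, alpha=0.1, epsilon=0.1, episodes=1000):
        Q = defaultdict(float)  # Q[(state, action)]
        for _ in range(episodes):
            s = env.reset()
            done = False
            while not done:
                # Exploration vs. exploitation: act randomly with probability epsilon.
                if random.random() < epsilon:
                    a = random.choice(actions)
                else:
                    a = max(actions, key=lambda b: Q[(s, b)])
                s_next, r, done = env.step(a)
                # Model-free update: no transition model is learned, unlike ADP.
                best_next = 0.0 if done else max(Q[(s_next, b)] for b in actions)
                Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
                s = s_next
        return Q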
