Planning and Acting in Partially Observable Stochastic Domains
Leslie P. Kaelbling, Michael L. Littman, Anthony R. Cassandra

CS594 Automated Decision Making, course presentation
Professor: Piotr
Hui Huang, University of Illinois at Chicago
Oct 30, 2002

Outline
MDP
POMDP
Basic POMDP → belief-space MDP
Policy tree
Piecewise-linear value function
Tiger problem

MDP: Model, Policy, Value Function
Which action, a, should the agent take?
In MDPs, a policy is a mapping from states to actions, π: S → A.
The value function V^π_t(s), given a policy π, is the expected sum of rewards gained from starting in state s and executing the non-stationary policy π for t steps.
Relation: the value function evaluates a policy by the long-run value the agent expects to gain from executing it.

Optimization (MDPs)
Recursively calculate the expected long-term reward for each state (or belief), then find the action that maximizes that expected reward (a reconstruction of the Bellman backup is given at the end of this section).

POMDP: Uncertainty, a broad perspective
What are POMDPs?
Belief state: a probability distribution over the states of the underlying MDP.
The agent keeps an internal belief state, b, that summarizes its experience.
The agent uses a state estimator, SE, to update the belief state to b' from the last action a_{t-1}, the current observation o_t, and the previous belief state b (a belief-update sketch appears at the end of this section).
The belief state is a sufficient statistic: it satisfies the Markov property.
A 1-D belief space suffices for a 2-state POMDP.

POMDP → Continuous-Space Belief MDP
A POMDP can be viewed as a continuous-space "belief MDP", since the agent's belief is encoded as a continuous "belief state".
We can solve this belief MDP as before, using a value-iteration algorithm to find the optimal policy over the continuous space; however, the algorithm needs some adaptations (a naive discretized version is sketched at the end of this section).

Belief MDP
The policy of a POMDP maps the current belief state to an action.
Because the belief state holds all relevant information about the past, the optimal policy of the POMDP is the solution of the (continuous-space) belief MDP.
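The two equations on the "Optimization (MDPs)" slide were not preserved in the scraped text. A standard reconstruction of the finite-horizon Bellman backup, assuming the usual notation T(s, a, s') for transition probabilities, R(s, a) for expected immediate reward, and γ for the discount factor (none of these symbols are defined explicitly in the slides), is:

\[ V_t(s) = \max_{a \in A} \Big[ R(s,a) + \gamma \sum_{s' \in S} T(s,a,s')\, V_{t-1}(s') \Big] \]

\[ \pi_t(s) = \operatorname*{arg\,max}_{a \in A} \Big[ R(s,a) + \gamma \sum_{s' \in S} T(s,a,s')\, V_{t-1}(s') \Big] \]

The first recursion computes the expected long-term reward for each state; the second reads off the maximizing action, which is exactly the two-step recipe the slide describes.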
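The state estimator SE mentioned on the POMDP slide follows directly from Bayes' rule. The NumPy sketch below is illustrative only: the array layout (T[a, s, s'] and O[a, s', o]) and every number in the example are assumptions made for this sketch, not values taken from the paper or the slides.

import numpy as np

def belief_update(b, a, o, T, O):
    """State estimator SE: compute b' from prior belief b, action a, observation o.

    b : (|S|,)            prior belief over states
    T : (|A|, |S|, |S|)   T[a, s, s'] = Pr(s' | s, a)
    O : (|A|, |S|, |Z|)   O[a, s', o] = Pr(o | s', a)
    """
    predicted = b @ T[a]                   # Pr(s' | b, a) = sum_s b(s) T(s, a, s')
    unnormalized = O[a][:, o] * predicted  # weight each s' by Pr(o | s', a)
    prob_o = unnormalized.sum()            # Pr(o | b, a), the normalizer
    return unnormalized / prob_o

# Hypothetical 2-state, 2-action, 2-observation problem (all values made up).
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.5, 0.5]]])
O = np.array([[[0.85, 0.15], [0.15, 0.85]],
              [[0.5, 0.5], [0.5, 0.5]]])
b = np.array([0.5, 0.5])
print(belief_update(b, a=0, o=0, T=T, O=O))   # posterior belief b'

Because b' depends only on b, a_{t-1}, and o_t, this update is what makes the belief state a sufficient statistic for the history, as the slide states.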
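To make the "continuous-space belief MDP" idea concrete, the sketch below runs value iteration over a discretized 1-D belief space for a generic 2-state POMDP. This is a naive grid approximation, not the exact piecewise-linear (alpha-vector) algorithm developed by Kaelbling, Littman, and Cassandra; the array shapes and parameters are assumptions for illustration only.

import numpy as np

def belief_mdp_value_iteration(T, O, R, gamma=0.95, n_points=101, n_iters=200):
    """Grid-based value iteration on the 1-D belief space of a 2-state POMDP.

    T : (|A|, 2, 2)   T[a, s, s'] = Pr(s' | s, a)
    O : (|A|, 2, |Z|) O[a, s', z] = Pr(z | s', a)
    R : (|A|, 2)      R[a, s] = expected immediate reward
    """
    nA, nZ = T.shape[0], O.shape[2]
    grid = np.linspace(0.0, 1.0, n_points)      # belief is (p, 1 - p)
    V = np.zeros(n_points)

    def V_interp(p):                            # value of an arbitrary belief point
        return np.interp(p, grid, V)

    for _ in range(n_iters):
        newV = np.empty(n_points)
        for i, p in enumerate(grid):
            b = np.array([p, 1.0 - p])
            q = np.empty(nA)
            for a in range(nA):
                q[a] = b @ R[a]                 # expected immediate reward
                pred = b @ T[a]                 # predicted next-state distribution
                for z in range(nZ):
                    pr_z = float(O[a][:, z] @ pred)      # Pr(z | b, a)
                    if pr_z > 0:
                        b_next = (O[a][:, z] * pred) / pr_z
                        q[a] += gamma * pr_z * V_interp(b_next[0])
            newV[i] = q.max()
        V = newV
    return grid, V

On a 2-state problem the belief is fully described by p = b(s0), which is why a single 1-D grid and interpolation suffice here; for larger state spaces this discretization blows up, which is the motivation for the piecewise-linear value-function representation listed in the outline.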