Markov Models.ppt
Markov Models
Introduction to Artificial Intelligence, COS302
Michael L. Littman
Fall 2001

Administration
Hope your midterm week is going well.

Breaking the Bank
Assistant professor: $20k/year. How much total cash?
20 + 20 + 20 + 20 + … = infinity! Nice, eh?

Discounted Rewards
Idea: the promise of future payment is not worth quite as much as payment now.
- Inflation / investing
- Chance of the game "ending"
Ex. $10k next year might be worth $10k × 0.9 today.

Infinite Sum
Assuming a discount rate of 0.9, how much does the assistant professor get in total?
x = 20 + .9(20) + .9^2(20) + .9^3(20) + …
  = 20 + .9(20 + .9(20) + .9^2(20) + …)
  = 20 + .9 x
x = 20/(1 − .9) = 20/.1 = 200

Academic Life
(State-transition diagram with states A, B, S, T, D; not captured in this text extraction. The rewards and transition probabilities appear in the System of Equations slide below.)

Solving for Total Reward
L(i) is the expected total reward received starting in state i.
How could we compute L(A)?
Would it help to compute L(B), L(T), L(S), and L(D) also?

Working Backwards
(Diagram slide; not captured in this text extraction.)

Reincarnation?
(Diagram slide; not captured in this text extraction.)

System of Equations
L(A) = 20 + .9(.6 L(A) + .2 L(B) + .2 L(S))
L(B) = 60 + .9(.6 L(B) + .2 L(S) + .2 L(T))
L(S) = 10 + .9(.7 L(S) + .3 L(D))
L(T) = 100 + .9(.7 L(T) + .3 L(D))
L(D) =  0 + .9(.5 L(D) + .5 L(A))

Transition Matrix
Let P be the matrix of transition probabilities:
P_ij = Pr(next = j | current = i)

Matrix Equation
L = R + γ P L

Solving the Equation
L = R + γ P L
L − γ P L = R
I L − γ P L = R        (introduce identity)
(I − γ P) L = R
(I − γ P)^(−1) (I − γ P) L = (I − γ P)^(−1) R
L = (I − γ P)^(−1) R
One matrix inversion plus a matrix-vector multiplication.

Markov Chain
Set of states, transitions from state to state.
Transitions depend only on the current state, not on the history: the Markov property.

What Does an MC Do?
(Diagram slide; not captured in this text extraction.)

MC Problems
- Probability of going from s to s′ in t steps.
- Probability of going from s to s′ in t or fewer steps.
- Averaged over t steps (in the limit), how often are we in state s′ starting from s?
- How many steps from s to s′, on average?
- Given reward values, the expected discounted reward starting at s.

Examples
- Queuing system: expected queue length, time until the queue fills up.
- Chutes & Ladders: average game time.
- Genetic algorithms: time to find the optimum.
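The Infinite Sum slide's closed form can be checked numerically. A minimal sketch (function name `discounted_total` is my own, not from the slides): the truncated discounted sum converges to reward/(1 − γ).

```python
# Discounted total reward for the assistant-professor example:
# $20k per year with discount factor 0.9; closed form is 20 / (1 - 0.9).
def discounted_total(reward, gamma, horizon):
    """Truncated discounted sum: reward * (1 + gamma + gamma^2 + ...)."""
    return sum(reward * gamma**t for t in range(horizon))

closed_form = 20 / (1 - 0.9)              # the slide's answer: 200
approx = discounted_total(20, 0.9, 1000)  # converges toward 200
print(closed_form, approx)
```

The truncation error after 1000 terms is proportional to 0.9^1000, which is negligible.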
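The Solving the Equation slide reduces the system to L = (I − γP)^(−1) R. A hedged sketch of that computation for the Academic Life chain, assuming NumPy and reading R and the rows of P off the System of Equations slide (state order A, B, S, T, D):

```python
import numpy as np

gamma = 0.9
R = np.array([20.0, 60.0, 10.0, 100.0, 0.0])   # rewards per state A, B, S, T, D
P = np.array([
    [0.6, 0.2, 0.2, 0.0, 0.0],   # A -> A, B, S
    [0.0, 0.6, 0.2, 0.2, 0.0],   # B -> B, S, T
    [0.0, 0.0, 0.7, 0.0, 0.3],   # S -> S, D
    [0.0, 0.0, 0.0, 0.7, 0.3],   # T -> T, D
    [0.5, 0.0, 0.0, 0.0, 0.5],   # D -> A, D
])

# Solve (I - gamma P) L = R directly; numerically preferable to
# forming the inverse (I - gamma P)^(-1) explicitly.
L = np.linalg.solve(np.eye(5) - gamma * P, R)
for name, value in zip("ABSTD", L):
    print(f"L({name}) = {value:.1f}")
```

The solution can be sanity-checked by plugging L back into any one of the five equations, e.g. L(A) = 20 + .9(.6 L(A) + .2 L(B) + .2 L(S)).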
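The first and third questions on the MC Problems slide have standard matrix-power answers: Pr(reach s′ from s in exactly t steps) = (P^t)[s, s′], and long-run occupancy can be estimated by averaging the distribution over many steps. A sketch under those assumptions (helper names are mine), reusing the Academic Life transition matrix:

```python
import numpy as np

# Academic Life transition matrix (states A, B, S, T, D),
# from the System of Equations slide.
P = np.array([
    [0.6, 0.2, 0.2, 0.0, 0.0],
    [0.0, 0.6, 0.2, 0.2, 0.0],
    [0.0, 0.0, 0.7, 0.0, 0.3],
    [0.0, 0.0, 0.0, 0.7, 0.3],
    [0.5, 0.0, 0.0, 0.0, 0.5],
])

def t_step_prob(P, s, s_prime, t):
    """Pr(state = s' after exactly t steps | start in s) = (P^t)[s, s']."""
    return np.linalg.matrix_power(P, t)[s, s_prime]

def long_run_frequency(P, s, t_max):
    """Fraction of the first t_max steps spent in each state, starting from s."""
    dist = np.zeros(len(P)); dist[s] = 1.0
    total = np.zeros(len(P))
    for _ in range(t_max):
        dist = dist @ P          # one-step update of the state distribution
        total += dist
    return total / t_max
```

"In t or fewer steps" is different: it requires making s′ absorbing before taking powers, so that paths are not counted after their first arrival.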