资讯工程所医学影像处理实验室 (Medical Image Processing Laboratory, Institute of Computer Science and Information Engineering)

Chapter 3: Single-Layer Perceptrons

Adaptive Filter Problem
- Consider a dynamic system whose mathematical characterization is unknown. All we know about the system is a set of labeled input-output data that it generates: when an m-dimensional input x(i) is applied to the system, the system produces the corresponding output d(i).
- The external behavior of the system can therefore be described by the data set
    T : \{ x(i), d(i) ;\ i = 1, 2, \ldots, n \}
  where x(i) = [x_1(i), x_2(i), \ldots, x_m(i)]^T.

Adaptive Filter Problem (cont.)
- The problem is how to design a multiple-input, single-output model of this system.
- The neural model operates under the influence of an algorithm that controls necessary adjustments to the synaptic weights of the neuron:
  - The algorithm starts from an arbitrary setting of the neuron's synaptic weights.
  - Adjustments to the synaptic weights, in response to statistical variations in the system's behavior, are made on a continuous basis.
  - Computations of adjustments to the synaptic weights are completed inside a time interval that is one sampling period long.
- The adaptive model consists of two continuous processes:
  - A filtering process, which involves the computation of two signals: an output y(i) and an error signal e(i).
  - An adaptive process: the automatic adjustment of the synaptic weights of the neuron in accordance with the error signal e(i).

Adaptive Filter Problem (cont.)
- The output y(i) is the same as the induced local field v(i):
    y(i) = v(i) = \sum_{k=1}^{m} w_k(i)\, x_k(i)    (3.2)
- Eq. (3.2) can be written in vector inner-product form as
    y(i) = x^T(i)\, w(i)
  where w(i) = [w_1(i), w_2(i), \ldots, w_m(i)]^T.
- The neuron's output y(i) is compared to the corresponding desired output d(i), yielding the error signal
    e(i) = d(i) - y(i)
  (a minimal numerical sketch of this filtering step follows at the end of this section).

Unconstrained Optimization Techniques
- If a cost function E(w) is continuously differentiable with respect to the weight vector w, the goal of the adaptive filtering algorithm is to choose a weight vector w with minimum cost.
- If the optimal weight vector is w*, it must satisfy
    E(w^*) \le E(w) \quad \text{for all } w
  i.e., minimize the cost function E(w) with respect to the weight vector w.
- The necessary condition for optimality is
    \nabla E(w^*) = 0
  where the gradient operator is
    \nabla = \left[ \frac{\partial}{\partial w_1}, \frac{\partial}{\partial w_2}, \ldots, \frac{\partial}{\partial w_m} \right]^T
  and the gradient vector of the cost function is
    \nabla E(w) = \left[ \frac{\partial E}{\partial w_1}, \frac{\partial E}{\partial w_2}, \ldots, \frac{\partial E}{\partial w_m} \right]^T

Unconstrained Optimization Techniques (cont.)
- Local iterative descent: starting with an initial guess denoted by w(0), generate a sequence of weight vectors w(1), w(2), ..., such that the cost function E(w) is reduced at each iteration of the algorithm:
    E(w(n+1)) < E(w(n))
  where w(n) is the old value of the weight vector and w(n+1) is its updated value.
- We hope that the algorithm will eventually converge onto the optimal solution w*.

Method of Steepest Descent
- The successive adjustments applied to the weight vector w are in the direction of steepest descent, that is, in a direction opposite to the gradient vector \nabla E(w) (see the descent sketch below).
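The filtering process above reduces to one inner product and one subtraction per time step. The following is a minimal NumPy sketch of Eq. (3.2) and the error computation; the function name filter_step and the synthetic system (w_true, the random input) are illustrative assumptions, not part of the original slides.

```python
import numpy as np

def filter_step(w, x, d):
    """One filtering step: output y(i) = x(i)^T w(i), error e(i) = d(i) - y(i)."""
    y = np.dot(x, w)   # induced local field v(i), which equals the output y(i)
    e = d - y          # error signal that drives the adaptive process
    return y, e

# Illustrative unknown system: its labeled input-output data are all we observe.
rng = np.random.default_rng(0)
m = 3                                # input dimension
w_true = np.array([0.5, -1.0, 2.0])  # hidden characterization of the system
x = rng.standard_normal(m)           # one m-dimensional input x(i)
d = float(np.dot(x, w_true))         # corresponding desired response d(i)

w0 = np.zeros(m)                     # arbitrary initial synaptic weights
y, e = filter_step(w0, x, d)
print(f"y(i) = {y:.3f}, e(i) = {e:.3f}")
```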

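To make the steepest-descent rule concrete, here is a short sketch that minimizes an assumed mean-squared-error cost E(w) = (1/2n) \sum_i (d(i) - x^T(i) w)^2 over a batch of labeled data, using the update w(n+1) = w(n) - \eta g(n) with g(n) = \nabla E(w(n)). The learning rate eta = 0.1 and the synthetic data are illustrative choices, not values from the slides.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 3
X = rng.standard_normal((n, m))       # inputs x(i), one per row
w_star = np.array([0.5, -1.0, 2.0])   # optimal weights of the unknown system
d = X @ w_star                        # desired responses d(i)

def cost(w):
    """Assumed cost: E(w) = (1/2n) * sum_i (d(i) - x(i)^T w)^2."""
    e = d - X @ w
    return 0.5 * np.mean(e ** 2)

def gradient(w):
    """Gradient vector g = grad E(w) of the cost above."""
    e = d - X @ w
    return -(X.T @ e) / n

eta = 0.1                             # positive learning-rate parameter (assumed)
w = np.zeros(m)                       # arbitrary initial guess w(0)
for step in range(101):
    if step % 25 == 0:
        print(f"step {step:3d}  E(w) = {cost(w):.8f}")  # E(w(n)) shrinks each pass
    w = w - eta * gradient(w)         # adjustment opposite the gradient direction

print("final w :", np.round(w, 4))                # converges toward w*
print("|grad|  :", np.linalg.norm(gradient(w)))   # near 0: the necessary condition
```

For a quadratic cost like this one, the iteration converges only when eta is small enough relative to the largest eigenvalue of the input correlation matrix X^T X / n; too large an eta makes E(w(n)) grow instead of decrease.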