Neural Network Course 4-123 Lesson Plan: Chapter 4 Multiple Feedforward Neural Network (Graduate Courseware)

Chapter 4 Multiple Feedforward Neural Network

Course project (Assignment 1):
  • Write a backpropagation (BP) algorithm program (any programming language).
  • Illustrate the computation process and the results graphically.
  • Verify the program on a multi-input single-output or multi-input multi-output example.
  • Data format: each row holds the inputs x1 x2 x3 … xn followed by the desired outputs y1 y2 … ym, one training pattern per row.

Step 2: Introduce the input I_i into the neural network and calculate all outputs from the first layer according to the equation
  a_jp = f( Σ_i w¹_ij I_ip + T¹_j ),
where f(·) is the sigmoid function.
Example: We introduce the input vector into the neural network and calculate the outputs from layer 1; note that f(·) here is again the sigmoid function.

Step 3: Knowing the outputs from the first layer, calculate the outputs from the second layer using the equation
  b_jp = f( Σ_i w²_ij a_ip + T²_j ).
Example: We calculate the output from each node in layer 2.

Step 4: Knowing the outputs from layer B (the hidden layer), calculate the outputs from layer C (the output layer) using the equation
  c_kp = f( Σ_j w³_jk b_jp + T³_k ),
where f(·) is the same sigmoid function.
Example: We calculate the output from each node in layer 3.

Step 5: Continue Steps 1–4 for all P training patterns presented to the input layer, then calculate the mean-squared error E according to
  E = (1 / 2P) Σ_{p=1..P} Σ_{k=1..n} (d_kp − c_kp)²,
where P is the number of training patterns presented to the input layer, n is the number of nodes in the output layer, d_kp is the desired output value of the kth node for the pth training pattern, and c_kp is the actual output value of the kth node for the pth training pattern.
Example: We are training the network with just one input pattern (P = 1). With desired output values d1 = 1, d2 = 0 and d3 = 0, our total mean-squared error is E = ½[(1 − c1)² + c2² + c3²], evaluated at the layer-3 outputs computed in Step 4.

Step 6: For the pth pattern, calculate δ³_kp, the gradient-descent term for the kth node in the output layer for training pattern p, using the equation
  δ³_kp = (d_kp − c_kp) · f′(x_kp),
where the partial derivative of the sigmoid function is
  f′(x) = f(x) [1 − f(x)],
and x_kp is the sum of the weighted inputs to the kth node on the output layer plus the threshold term (i.e., for the pth training pattern)
  x_kp = Σ_j w³_jk b_jp + T³_k,
where b_jp is the output value of the jth node in the hidden layer for training pattern p and T³_k is the threshold value of the kth node in the output layer.
Example: We calculate δ³_k, the gradient-descent term for layer 3. To obtain …
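Because the slides' numeric example (the actual weights, inputs and intermediate outputs) is not reproduced in this text, the Python sketch below only illustrates the structure of Steps 2–6. It is a minimal sketch, not the course's reference program: the layer sizes, the random initialisation, the 1/(2P) normalisation of E, the assumption that layer 1 is itself a weighted sigmoid layer, and all function names (forward, mean_squared_error, output_layer_delta) are choices made here rather than taken from the slides.

```python
import numpy as np


def sigmoid(x):
    """Logistic activation f(x) = 1 / (1 + exp(-x)), used in Steps 2-4."""
    return 1.0 / (1.0 + np.exp(-x))


def sigmoid_deriv(fx):
    """Derivative of the sigmoid written through its output: f'(x) = f(x) * (1 - f(x))."""
    return fx * (1.0 - fx)


def forward(I, W1, T1, W2, T2, W3, T3):
    """Steps 2-4: propagate one input pattern I through layers 1 -> 2 (B) -> 3 (C).

    Each layer applies the sigmoid to (weighted sum of its inputs + threshold term T)."""
    a = sigmoid(W1 @ I + T1)   # Step 2: outputs of layer 1
    b = sigmoid(W2 @ a + T2)   # Step 3: outputs of layer 2 (hidden layer B)
    c = sigmoid(W3 @ b + T3)   # Step 4: outputs of layer 3 (output layer C)
    return a, b, c


def mean_squared_error(D, C):
    """Step 5: E = (1 / 2P) * sum over patterns p and output nodes k of (d_kp - c_kp)^2.

    D and C are (P, n) arrays of desired and actual outputs; the 1/(2P) factor is assumed."""
    P = D.shape[0]
    return np.sum((D - C) ** 2) / (2.0 * P)


def output_layer_delta(d, c):
    """Step 6: delta3_k = (d_k - c_k) * f'(x_k) for every output node k.

    f'(x_k) is written as c_k * (1 - c_k), since c_k = f(x_k)."""
    return (d - c) * sigmoid_deriv(c)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_in, n_h1, n_h2, n_out = 4, 5, 5, 3          # assumed layer sizes, not from the slides

    # Randomly initialised weights and thresholds (the slides' numeric values are not given here).
    W1, T1 = rng.uniform(-1, 1, (n_h1, n_in)), rng.uniform(-1, 1, n_h1)
    W2, T2 = rng.uniform(-1, 1, (n_h2, n_h1)), rng.uniform(-1, 1, n_h2)
    W3, T3 = rng.uniform(-1, 1, (n_out, n_h2)), rng.uniform(-1, 1, n_out)

    I = rng.uniform(0, 1, n_in)                   # one input pattern (P = 1)
    d = np.array([1.0, 0.0, 0.0])                 # desired outputs d1=1, d2=0, d3=0 from the example

    a, b, c = forward(I, W1, T1, W2, T2, W3, T3)
    E = mean_squared_error(d[None, :], c[None, :])
    delta3 = output_layer_delta(d, c)

    print("layer-3 outputs:", c)
    print("mean-squared error E:", E)
    print("output-layer gradient terms delta3:", delta3)
```

In a full BP program such as the one requested in the course project, these δ³ terms would then be combined with a learning rate and the hidden-layer outputs to update the layer-3 weights, and analogous δ terms would be back-propagated to the earlier layers; those later steps fall outside the excerpt shown above.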
