Published 2017-01-29 in Chongqing
Perceptron neural network:
clear all
close all
% OR-gate training data: inputs P (2x4), targets T (1x4)
P=[0 0 1 1;0 1 0 1]
T=[0 1 1 1]
% Single-neuron perceptron with a hard-limit transfer function
% and the perceptron learning rule (function names must be quoted)
net=newp(minmax(P),1,'hardlim','learnp')
net=train(net,P,T)
Y=sim(net,P)
% Plot the input vectors and the learned decision boundary
plotpv(P,T)
plotpc(net.iw{1,1},net.b{1})
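The same OR-gate perceptron can be sketched in plain NumPy to show what `train` is doing under the hood. This is a minimal illustration of the perceptron rule, not the toolbox's implementation; the function name `train_perceptron`, the epoch count, and the zero initialization are all my own choices:

```python
import numpy as np

def train_perceptron(X, t, epochs=10):
    """Perceptron rule: on each sample, w += (t - y) * x, b += (t - y)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, t):
            y = 1.0 if x @ w + b >= 0 else 0.0  # hard-limit activation
            w += (target - y) * x
            b += (target - y)
    return w, b

# OR-gate data: rows are samples (columns of P above), targets are T
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 1, 1, 1], dtype=float)
w, b = train_perceptron(X, t)
y = (X @ w + b >= 0).astype(float)
print(w, b, y)  # y reproduces the targets [0, 1, 1, 1]
```

The data is linearly separable, so the rule converges in a handful of epochs.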
Linear neural network:
clear all
close all
% Fit a linear neuron to noisy data lying near y = 2x
P=[1.0 2.1 3 4]
T=[2.0 4.01 5.9 8.0]
lr=maxlinlr(P)            % maximum stable learning rate for this input
net=newlin(minmax(P),1,0,lr)
net.trainParam.epochs=300
net.trainParam.goal=0.05  % mean-squared-error goal
net=train(net,P,T)
Y=sim(net,P)
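Training a linear neuron like this uses the Widrow-Hoff (LMS) rule. A NumPy sketch of the idea (illustrative only; `train_lms` and the learning rate 0.02 are my own choices, and the toolbox computes the rate via `maxlinlr` instead):

```python
import numpy as np

def train_lms(p, t, lr, epochs=300):
    """Widrow-Hoff (LMS) rule for a single linear neuron y = w*p + b."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in zip(p, t):
            e = target - (w * x + b)  # prediction error
            w += lr * e * x           # gradient step on squared error
            b += lr * e
    return w, b

p = np.array([1.0, 2.1, 3.0, 4.0])
t = np.array([2.0, 4.01, 5.9, 8.0])
w, b = train_lms(p, t, lr=0.02)
print(w, b)  # w approaches the least-squares slope, roughly 2
```

The learning rate must be small enough that each update contracts the error (here 0.02 * max(x)^2 < 1), which is exactly what `maxlinlr` guarantees in the MATLAB version.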
% Direct design of a linear layer: newlind solves the least-squares
% problem in closed form, with no iterative training
clear all
close all
P=[1.1 2.2 3]
T=[2.1 4.3 5.9]
net=newlind(P,T)
Y=sim(net,P)
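The closed-form design idea behind `newlind` can be sketched as an ordinary least-squares solve in NumPy (an illustration of the math, not the toolbox code):

```python
import numpy as np

p = np.array([1.1, 2.2, 3.0])
t = np.array([2.1, 4.3, 5.9])

# Design matrix with a bias column; solve min ||A @ [w, b] - t||^2
A = np.column_stack([p, np.ones_like(p)])
w, b = np.linalg.lstsq(A, t, rcond=None)[0]
y = w * p + b
# This particular data happens to be fit exactly by y = 2*x - 0.1
print(w, b, y)
```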
% Plot the tansig (hyperbolic tangent sigmoid) transfer function
x=-4:.1:4
y=tansig(x)
plot(x,y,'^-r')
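tansig is mathematically the same function as tanh; the toolbox documents it as 2/(1+exp(-2n))-1, an algebraically equivalent form. A quick NumPy check of that identity (illustrative):

```python
import numpy as np

x = np.arange(-4.0, 4.1, 0.1)
tansig = 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0
# The two formulas agree to floating-point precision
print(np.max(np.abs(tansig - np.tanh(x))))
```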
Sigmoid
As a reminder: σ(x) = 1/(1 + exp(−x)).
Its derivative: d/dx σ(x) = (1 − σ(x)) · σ(x).
The problem here is exp(−x), which overflows for large negative x, even though the result of σ is restricted to the interval [0, 1]. The solution: the sigmoid can be expressed in terms of tanh: σ(x) = (1/2)(1 + tanh(x/2)).
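A stable implementation based on this identity might look like the following NumPy sketch (function names are my own):

```python
import numpy as np

def sigmoid(x):
    """Numerically stable sigmoid via sigma(x) = (1/2)(1 + tanh(x/2)).

    tanh saturates instead of overflowing, unlike exp(-x) for very
    negative x.
    """
    return 0.5 * (1.0 + np.tanh(0.5 * x))

def sigmoid_grad(x):
    """Derivative: sigma'(x) = (1 - sigma(x)) * sigma(x)."""
    s = sigmoid(x)
    return (1.0 - s) * s

# Extreme inputs produce no overflow warning
print(sigmoid(np.array([-1000.0, 0.0, 1000.0])))  # [0, 0.5, 1]
```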
Softmax
Softmax, which is defined as softmax_i(a) = exp(a_i) / Σ_j exp(a_j) (where a is a vector), is a little more complicated. The key here is to express softmax in terms of the logsumexp function: logsumexp(a) = log(Σ_i exp(a_i)), for which good, non-overflowing implementations are usually available.
Then, we have softmax(a) = exp(a − logsumexp(a)).
As a bonus: the vector of partial derivatives / the gradient of softmax is analogous to the sigmoid, i.e. ∂/∂a_i softmax_i(a) = (1 − softmax_i(a)) · softmax_i(a).
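Sketched in NumPy, with logsumexp itself stabilized by the usual max-shift trick (an illustration; SciPy ships a production-quality `scipy.special.logsumexp`):

```python
import numpy as np

def logsumexp(a):
    """log(sum(exp(a))), shifted by max(a) so exp never overflows."""
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

def softmax(a):
    """softmax(a) = exp(a - logsumexp(a)); stable even for large inputs."""
    return np.exp(a - logsumexp(a))

# Naive exp(a)/sum(exp(a)) would overflow here; this version does not
a = np.array([1000.0, 1001.0, 1002.0])
p = softmax(a)
print(p, p.sum())  # finite probabilities summing to 1
```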