
MATLAB neural network example routine (matlab 神经网络 例程)


% Split the data into training inputs and targets
X = []; Y1 = []; Y2 = []; Y3 = []; Y4 = [];
A = []; B = [];
str = {'Test4'};
Data = textread([str{1}, '.txt']);          % read the training data
for i = 1:40
    X = [X Data(i:end-41+i-10, 1) Data(i:end-41+i-10, 2) ...
           Data(i:end-41+i-10, 3) Data(i:end-41+i-10, 1)];
end
Y1 = Data(41:end-10, 1);
Y2 = Data(41:end-10, 2);
Y3 = Data(41:end-10, 3);
Y4 = Data(41:end-10, 4);
A = X(end, :);
%Input = Input;
%Output = Output;
% Arrange as variables-by-rows matrices, as premnmx expects
% (the transpose operators appear to have been dropped from the flattened source text)
X = X'; Y1 = Y1'; Y2 = Y2'; Y3 = Y3'; Y4 = Y4';
[X, minx, maxx, Y1, miny1, maxy1] = premnmx(X, Y1);   % normalize the data to [-1, 1]

% Set the parameters of the neural network
Para.Goal      = 1;      % training error goal
Para.Epochs    = 800;    % number of training epochs
Para.LearnRate = 0.8;    % learning rate
Para.Show      = 200;    % display interval during training
Para.InRange   = repmat([-1 1], size(X, 1), 1);   % input range of each network input
%Para.Neurons  = [size(X, 1)*2+1 1];
Para.Neurons   = [30 1];                  % neurons in the two network layers
Para.TransferFcn = {'logsig', 'purelin'}; % transfer function of each layer
Para.TrainFcn  = 'trainlm';               % network training function
    % traingd : gradient descent back-propagation
    % traingda: gradient descent with adaptive learning rate
    % traingdm: gradient descent with momentum
    % traingdx: gradient descent with momentum and adaptive learning rate
Para.LearnFcn   = 'learngdm';             % network learning function
Para.PerformFcn = 'sse';                  % network error (performance) function
Para.InNum   = size(X, 1);                    % input dimension
Para.IWNum   = Para.InNum * Para.Neurons(1);  % number of input weights
Para.LWNum   = prod(Para.Neurons);            % number of layer weights
Para.BiasNum = sum(Para.Neurons);             % number of biases

Net1 = newff(Para.InRange, Para.Neurons, Para.TransferFcn, ...
             Para.TrainFcn, Para.LearnFcn, Para.PerformFcn);   % create the network
% (the preview text breaks off here, mid-statement: "Ne...")
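The listing above stops just after the network is created. For completeness, below is a minimal sketch of how a network built this way is typically trained and evaluated with the same legacy Neural Network Toolbox functions. Only Net1, Para, X, Y1, miny1, and maxy1 come from the code above; the rest (setting trainParam, calling train and sim, de-normalizing with postmnmx) is an assumption about how the truncated script continues, not the original author's code.

% Hedged sketch only: assumed continuation of the truncated script.
Net1.trainParam.show   = Para.Show;       % display interval
Net1.trainParam.epochs = Para.Epochs;     % maximum training epochs
Net1.trainParam.goal   = Para.Goal;       % error goal

Net1 = train(Net1, X, Y1);                % train on the normalized data

Y1sim = sim(Net1, X);                     % simulate the trained network
Y1sim = postmnmx(Y1sim, miny1, maxy1);    % map outputs back to the original scale

Note that premnmx, postmnmx, and this calling form of newff are legacy Neural Network Toolbox functions; recent MATLAB releases replace them with mapminmax and feedforwardnet/fitnet.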
