Optimizing generative AI by backpropagating language model feedback
https://doi.org/10.1038/s41586-025-08661-4

Mert Yuksekgonul, Federico Bianchi, Joseph Boen, Sheng Liu, Pan Lu, Zhi Huang, Carlos Guestrin, James Zou
Received: 12 June 2024
Accepted: 16 January 2025
Published online: 19 March 2025

Recent breakthroughs in artificial intelligence (AI) are increasingly driven by systems orchestrating multiple large language models (LLMs) and other specialized tools, such as search engines and simulators. So far, these systems are primarily handcrafted by domain experts and tweaked through heuristics rather than being automatically optimized, presenting a substantial challenge to accelerating progress. The development of artificial neural networks faced a similar challenge until backpropagation and automatic differentiation transformed the field by making optimization turnkey. Analogously, here we introduce TextGrad, a versatile framework that performs optimization by backpropagating LLM-generated feedback to improve AI systems. By leveraging natural language feedback to critique and
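The backpropagation analogy in the abstract can be sketched as a minimal loop: an LLM critic produces natural-language feedback (the "textual gradient"), and an update step rewrites the variable using that feedback. This is an illustrative sketch only, not the TextGrad library's actual API; the `critic` and `improve` functions stand in for LLM calls and are stubbed with plain Python here.

```python
# Minimal sketch of "textual gradient descent". In the real framework both
# critic and improve would be LLM calls; here they are deterministic stubs
# so the loop structure is runnable and easy to follow.

def critic(output: str, goal: str) -> str:
    # Stub "backward pass": judge the output against a goal and return
    # natural-language feedback, or "OK" if no change is needed.
    if goal.lower() not in output.lower():
        return f"The answer should mention '{goal}'."
    return "OK"

def improve(text: str, feedback: str) -> str:
    # Stub "update step": apply the feedback by rewriting the text.
    # (A real system would prompt an LLM with the text and the feedback.)
    if feedback != "OK":
        return text + " backpropagation"
    return text

def textual_gradient_descent(text: str, goal: str, steps: int = 3) -> str:
    # Iterate: get feedback, stop if satisfied, otherwise rewrite.
    for _ in range(steps):
        feedback = critic(text, goal)
        if feedback == "OK":
            break
        text = improve(text, feedback)
    return text

print(textual_gradient_descent("Neural nets are trained with", "backpropagation"))
# → Neural nets are trained with backpropagation
```

The point of the sketch is the control flow, not the stubs: feedback flows backwards from an evaluation to the variable being optimized, mirroring how numeric gradients flow in ordinary backpropagation.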