Let’s Verify Step by Step
Hunter Lightman∗, Vineet Kosaraju∗, Yura Burda∗, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, Karl Cobbe∗
OpenAI
Abstract
In recent years, large language models have greatly improved in their ability to perform complex multi-step reasoning. However, even state-of-the-art models still regularly produce logical mistakes. To train more reliable models, we can turn either to outcome supervision, which provides feedback for a final result, or process supervision, which provides feedback for each intermediate reasoning step. Given the importance of training reliable models, and given the high cost of human feedback, it is important to carefully compare both methods. Recent work has already begun this comparison, but many questions still remain. We conduct our own investigation, finding that process supervision significantly outperforms outcome supervision for training models to solve problems from the challenging MATH dataset. Our process-supervised model solves 78% of problems from a representative subset of the MATH test set. Additionally, we show that active learning significantly improves the efficacy of process supervision. To support related research, we also release PRM800K, the complete dataset of 800,000 step-level human feedback labels used to train our best reward model.
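To make the distinction concrete, the following minimal Python sketch (not the authors' released code; the function names and probabilities are hypothetical) contrasts the two forms of supervision: outcome supervision yields a single feedback signal for the final result, while process supervision yields one signal per reasoning step, with the solution-level score taken as the product of per-step correctness probabilities, following the scoring rule the paper describes.

```python
import math

# Hedged illustration, not the paper's implementation. An outcome reward
# model (ORM) receives one label per solution; a process reward model (PRM)
# receives one label per intermediate step.

def orm_score(final_answer_correct_prob: float) -> float:
    """Outcome supervision: a single feedback signal for the final result."""
    return final_answer_correct_prob

def prm_score(step_correct_probs: list[float]) -> float:
    """Process supervision: feedback for each intermediate reasoning step.

    The solution-level score is the probability that every step is correct,
    i.e. the product of the per-step correctness probabilities.
    """
    return math.prod(step_correct_probs)

# Hypothetical 4-step solution with one weak step (0.60):
steps = [0.99, 0.97, 0.60, 0.95]
print(f"PRM solution score: {prm_score(steps):.3f}")  # ~0.547
print(f"ORM solution score: {orm_score(0.80):.3f}")   # single end-result signal
```

Note how the per-step scoring localizes the weak step (0.60) and penalizes the whole solution for it, whereas the outcome signal alone cannot say which step went wrong.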
1 Introduction
Large language models are capable of solving tasks that require complex multi-step reasoning by generating solutions in a step-by-step chain-of-thought format (Nye et al., 2021).