
September 23, 2021

Summarizing books with human feedback

Scaling human oversight of AI systems for tasks that are difficult to evaluate.


To safely deploy powerful, general-purpose artificial intelligence in the future, we need to ensure that machine learning models act in accordance with human intentions. This challenge has become known as the alignment problem.

A scalable solution to the alignment problem needs to work on tasks where model outputs are difficult or time-consuming for humans to evaluate. To test scalable alignment techniques, we trained a model to summarize entire books, as shown in the following samples.[A] Our model works by first summarizing small sections of a book, then summarizing those summaries into a higher-level summary, and so on.

[Interactive samples: model-written summaries of full-length books.]

Our best model is fine-tuned from GPT-3 and generates sensible summaries of entire books, sometimes even matching the average quality of human-written summaries: humans who have read the book give its summaries a 6/7 rating (similar to the average human-written summary) 5% of the time and a 5/7 rating 15% of the time. Our model also achieves state-of-the-art results on the BookSum dataset for book-length summarization. A zero-shot question-answering model can use our model's summaries to obtain competitive results on the NarrativeQA dataset for book-length question answering.[B]

Our approach: combining reinforcement learning from human feedback and recursive task decomposition

Consider the task of summarizing a piece of text. Large pretrained models aren't very good at summarization. In past work, we found that training a model with reinforcement learning from human feedback helped align model summaries with human preferences on short posts and articles. But directly judging summaries of entire books is costly: a human would need to read the whole book, which takes many hours.
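To make the feedback step concrete, here is a minimal sketch of the standard pairwise comparison loss used to train a reward model from human preferences. The function names and batch setup are illustrative assumptions, not the paper's exact training code.

```python
import torch
import torch.nn.functional as F

def reward_model_loss(reward_preferred: torch.Tensor,
                      reward_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise loss for learning rewards from human comparisons.

    Each pair holds the scalar reward the model assigns to the summary a
    labeler preferred and to the one they rejected; minimizing the loss
    pushes the preferred summary's reward higher.
    """
    return -F.logsigmoid(reward_preferred - reward_rejected).mean()

# Illustrative usage, with random scores standing in for reward model outputs:
chosen, rejected = torch.randn(8), torch.randn(8)
loss = reward_model_loss(chosen, rejected)  # backpropagate this during training
```

The learned reward model then provides the training signal for reinforcement learning on the summarization policy itself.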

To address this problem, we additionally make use of recursive task decomposition: we procedurally break a difficult task up into easier ones. In this case, we break summarizing a long piece of text into summarizing several shorter pieces. Compared to an end-to-end training procedure, recursive task decomposition has the following advantages (a minimal code sketch of the procedure follows the list):

  1. Decomposition allows humans to evaluate model summaries more quickly by using summaries of smaller parts of the book rather than reading the source text.

  2. It is easier to trace the summary-writing process. For example, you can trace where in the original text certain events from the summary happen. See for yourself in our summary explorer!

  3. Our method can be used to summarize books of unbounded length, unrestricted by the context length of the transformer models we use.
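As a concrete illustration of the decomposition, the sketch below splits a text into chunks, summarizes each chunk, and recursively summarizes the concatenation of those summaries until the result fits in a single model call. The `summarize` placeholder, the character-based chunking, and the chunk size are assumptions made for illustration; the actual system decomposes books more carefully and trains every summarization step with human feedback.

```python
def split_into_chunks(text: str, max_chars: int = 2000) -> list[str]:
    """Split text into pieces short enough for one model call (illustrative)."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize(passage: str) -> str:
    """Placeholder for a call to a fine-tuned summarization model.

    Truncation is used only to keep this sketch self-contained and runnable.
    """
    return passage[:200]

def summarize_recursively(text: str, max_chars: int = 2000) -> str:
    # Base case: the text already fits in a single model call.
    if len(text) <= max_chars:
        return summarize(text)
    # Summarize each section, then summarize the summaries at the next level.
    section_summaries = [summarize(c) for c in split_into_chunks(text, max_chars)]
    return summarize_recursively("\n".join(section_summaries), max_chars)
```

Because each level shrinks the text by a roughly constant factor, the recursion terminates for books of any length, which is what makes the method independent of the model's context window.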

Why we are working on this

This work is part of our ongoing research into aligning advanced AI systems, which is key to our mission. As we train our models to do increasingly complex tasks, making informed evaluations of the models' outputs will become increasingly difficult for humans. This makes it harder to detect subtle problems in model outputs that could lead to negative consequences when these models are deployed. Therefore, we want our ability to evaluate our models to grow alongside their capabilities.

Our current approach to this problem is to empower humans to evaluate machine learning model outputs with assistance from other models. In this case, to evaluate book summaries, we give humans individual chapter summaries written by our model, which saves them the time of reading the source text. Our progress on book summarization is the first large-scale empirical work on scaling alignment techniques.

Going forward, we are researching better ways to assist humans in evaluating model behavior, with the goal of finding techniques that scale to aligning artificial general intelligence.

We're always looking for more talented people to join us, so if this work interests you, please apply to join our team!

Footnotes

  A. These samples were selected from works in the public domain, and are part of GPT-3's pretraining data. To control for this effect, and purely for research purposes, our paper evaluates summaries of books the model has never seen before.

  B. We've amended our original claim about results on NarrativeQA after being made aware of prior work with better results than ours.

Acknowledgments

We’d like to acknowledge our paper co-authors: Long Ouyang, Daniel Ziegler, Nisan Stiennon, and Paul Christiano.

Thanks to the following for feedback on this release: Steve Dowling, Hannah Wong, Miles Brundage, Gretchen Krueger, Ilya Sutskever, and Sam Altman.


Design: Justin Jay Wang

Book Cover Artwork: DALL·E