Gathering human feedback
RL-Teacher is an open-source implementation of our interface to train AIs via occasional human feedback rather than hand-crafted reward functions. The underlying technique was developed as a step towards safe AI systems, but also applies to reinforcement learning problems with rewards that are hard to specify.
The release contains three main components:
A reward predictor that can be plugged into any agent and learns to predict which of the agent's possible actions a human would approve of (a sketch of this idea follows the list).
An example agent that learns via a reward function specified by the reward predictor (a second sketch below illustrates this loop). RL-Teacher ships with three pre-integrated algorithms, including OpenAI Baselines PPO.
A web-app that humans can use to give feedback, providing the data used to train the reward predictor.
The entire system consists of fewer than 1,000 lines of Python code (excluding the agents). After you’ve set up your web server, you can launch an experiment by running:
$ python rl_teacher/teach.py -p human --pretrain_labels 175 -e Reacher-v1 -n human-175
Humans can give feedback via a simple web interface (shown above), which can be run locally (not recommended) or on a separate machine. Full documentation is available on the project’s GitHub repository. We’re excited to see what AI researchers and engineers do with this technology, so please get in touch with any experimental results!