Some considerations on learning to explore via meta-reinforcement learning
Abstract
We consider the problem of exploration in meta-reinforcement learning. We propose two new meta-reinforcement learning algorithms, E-MAML and E-RL², and present results on a novel environment we call "Krazy World" as well as on a set of maze environments. We show that E-MAML and E-RL² deliver better performance on tasks where exploration is important.