June 28, 2017

Faster physics in Python

We’re open-sourcing a high-performance Python library for robotic simulation using the MuJoCo engine, developed over our past year of robotics research.

[Image: eight 3D renders of the same robot arm in varying color assortments]

This library is one of our core tools for deep learning robotics research, which we’ve now released as a major version of mujoco-py, our Python 3 bindings for MuJoCo. mujoco-py brings a number of new capabilities and significant performance boosts. New features include:

  • Efficient handling of parallel simulations

  • GPU-accelerated headless 3D rendering

  • Direct access to MuJoCo functions and data structures

  • Support for all MuJoCo 1.50 features, like its improved contact solver

Batched simulation

Many methods in trajectory optimization and reinforcement learning (like LQR, PI2, and TRPO) benefit from being able to run multiple simulations in parallel. mujoco-py uses data parallelism through OpenMP and direct-access memory management through Cython and NumPy to make batched simulation more efficient.

Naive usage of the new version’s MjSimPool interface shows a 400% speedup over the old version, and still about a 180% speedup over an optimized and restricted usage pattern that uses Python’s multiprocessing package to achieve the same level of parallelism. The majority of the speedup comes from reduced access times to the various MuJoCo data structures. Check out examples/ for a tour of MjSimPool.
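As a rough illustration of the multiprocessing baseline pattern that MjSimPool is compared against, here is a minimal sketch using only the standard library. The "simulation" is a stand-in arithmetic step rather than MuJoCo, and `rollout` is a hypothetical helper; the structure (one worker process per simulation, results gathered in a batch) is the part that matters.

```python
# Sketch of the one-process-per-simulation pattern using Python's
# multiprocessing package. The physics step is faked with arithmetic.
from multiprocessing import Pool

def rollout(seed):
    """Run a toy 100-step 'simulation' and return its final state."""
    state = float(seed)
    for _ in range(100):
        state = 0.99 * state + 0.01  # stand-in for sim.step()
    return state

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        finals = pool.map(rollout, range(8))  # 8 rollouts in parallel
    print(len(finals))  # => 8
```

The per-process overhead of this pattern (serializing inputs and results across process boundaries) is part of what MjSimPool avoids by keeping all simulations in one process with direct memory access.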

High-performance texture randomization

We use the domain randomization technique across many projects at OpenAI. The latest version of mujoco-py supports headless GPU rendering; this yields a speedup of ~40x compared to CPU-based rendering, letting us generate hundreds of frames per second of synthetic image data. In the above (slowed down) animation we use this to vary the textures of one of our robots, which helps it identify its body when we transfer it from the simulator to reality. Check out examples/ for an example of randomized texture generation.
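The randomization itself is simple in spirit. Here is a minimal NumPy sketch of generating randomized textures; this only builds the pixel arrays (a random base color plus per-pixel noise, one common flavor of the technique), and is independent of mujoco-py, where the values would then be written into the model's texture data before rendering. The `random_texture` helper is hypothetical, for illustration.

```python
# Sketch: generate randomized RGB textures for domain randomization.
import numpy as np

def random_texture(height, width, rng):
    """Return a (height, width, 3) uint8 texture: a random base color
    perturbed by per-pixel noise."""
    base = rng.integers(0, 256, size=3)                 # random base color
    noise = rng.integers(-30, 31, size=(height, width, 3))
    return np.clip(base + noise, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
textures = [random_texture(64, 64, rng) for _ in range(4)]
print(textures[0].shape)  # => (64, 64, 3)
```

Because the textures are cheap to regenerate, a fast headless renderer becomes the bottleneck-breaker: each training frame can be drawn with a fresh random appearance.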

Virtual Reality with mujoco-py

The API exposed by mujoco-py is sufficient to enable Virtual Reality interaction without any extra C++ code. We ported MuJoCo’s C++ VR example to Python using mujoco-py. If you have an HTC Vive VR setup, you can try it using this example (this support is considered experimental, but we’ve been using it internally for a while).

API and usage

The simplest way to get started with mujoco-py is with the MjSim class. It is a wrapper around the simulation model and data, and lets you easily step the simulation and render images from camera sensors. Here’s a simple example:

from mujoco_py import load_model_from_path, MjSim

model = load_model_from_path("xmls/tosser.xml")
sim = MjSim(model)

sim.step()
print(sim.data.qpos)
# => [ -1.074e-05   1.043e-04  -3.923e-05   0.000e+00   0.000e+00]

For advanced users, we provide a number of lower-level interfaces for accessing the innards of the MuJoCo C structs and functions directly. Refer to the README and the full documentation to learn more.