\n",
"\n",
"This notebook provides an introductory tutorial for [**MuJoCo** physics](https://github.com/google-deepmind/mujoco#readme), using the native Python bindings.\n",
"\n",
""
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "YvyGCsgSCxHQ"
},
"source": [
"# All imports"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Xqo7pyX-n72M"
},
"outputs": [],
"source": [
"!pip install mujoco\n",
"\n",
"# Set up GPU rendering.\n",
"from google.colab import files\n",
"import distutils.util\n",
"import os\n",
"import subprocess\n",
"if subprocess.run('nvidia-smi').returncode:\n",
" raise RuntimeError(\n",
" 'Cannot communicate with GPU. '\n",
" 'Make sure you are using a GPU Colab runtime. '\n",
" 'Go to the Runtime menu and select Choose runtime type.')\n",
"\n",
"# Add an ICD config so that glvnd can pick up the Nvidia EGL driver.\n",
"# This is usually installed as part of an Nvidia driver package, but the Colab\n",
"# kernel doesn't install its driver via APT, and as a result the ICD is missing.\n",
"# (https://github.com/NVIDIA/libglvnd/blob/master/src/EGL/icd_enumeration.md)\n",
"NVIDIA_ICD_CONFIG_PATH = '/usr/share/glvnd/egl_vendor.d/10_nvidia.json'\n",
"if not os.path.exists(NVIDIA_ICD_CONFIG_PATH):\n",
" with open(NVIDIA_ICD_CONFIG_PATH, 'w') as f:\n",
" f.write(\"\"\"{\n",
" \"file_format_version\" : \"1.0.0\",\n",
" \"ICD\" : {\n",
" \"library_path\" : \"libEGL_nvidia.so.0\"\n",
" }\n",
"}\n",
"\"\"\")\n",
"\n",
"# Configure MuJoCo to use the EGL rendering backend (requires GPU)\n",
"print('Setting environment variable to use GPU rendering:')\n",
"%env MUJOCO_GL=egl\n",
"\n",
"# Check if installation was succesful.\n",
"try:\n",
" print('Checking that the installation succeeded:')\n",
" import mujoco\n",
" mujoco.MjModel.from_xml_string('')\n",
"except Exception as e:\n",
" raise e from RuntimeError(\n",
" 'Something went wrong during installation. Check the shell output above '\n",
" 'for more information.\\n'\n",
" 'If using a hosted Colab runtime, make sure you enable GPU acceleration '\n",
" 'by going to the Runtime menu and selecting \"Choose runtime type\".')\n",
"\n",
"print('Installation successful.')\n",
"\n",
"# Other imports and helper functions\n",
"import time\n",
"import itertools\n",
"import numpy as np\n",
"\n",
"# Graphics and plotting.\n",
"print('Installing mediapy:')\n",
"!command -v ffmpeg >/dev/null || (apt update && apt install -y ffmpeg)\n",
"!pip install -q mediapy\n",
"import mediapy as media\n",
"import matplotlib.pyplot as plt\n",
"\n",
"# More legible printing from numpy.\n",
"np.set_printoptions(precision=3, suppress=True, linewidth=100)\n",
"\n",
"from IPython.display import clear_output\n",
"clear_output()\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "t0CF6Gvkt_Cw"
},
"source": [
"# MuJoCo basics\n",
"\n",
"We begin by defining and loading a simple model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "3KJVqak6xdJa"
},
"outputs": [],
"source": [
"xml = \"\"\"\n",
"\n",
" \n",
" \n",
" \n",
" \n",
"\n",
"\"\"\"\n",
"model = mujoco.MjModel.from_xml_string(xml)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "slhf39lGxvDI"
},
"source": [
"The `xml` string is written in MuJoCo's [MJCF](http://www.mujoco.org/book/modeling.html), which is an [XML](https://en.wikipedia.org/wiki/XML#Key_terminology)-based modeling language.\n",
" - The only required element is ``. The smallest valid MJCF model is `` which is a completely empty model.\n",
" - All physical elements live inside the `` which is always the top-level body and constitutes the global origin in Cartesian coordinates.\n",
" - We define two geoms in the world named `red_box` and `green_sphere`.\n",
" - **Question:** The `red_box` has no position, the `green_sphere` has no type, why is that?\n",
" - **Answer:** MJCF attributes have *default values*. The default position is `0 0 0`, the default geom type is `sphere`. The MJCF language is described in the documentation's [XML Reference chapter](https://mujoco.readthedocs.io/en/latest/XMLreference.html).\n",
"\n",
"The `from_xml_string()` method invokes the model compiler, which creates a binary `mjModel` instance."
]
},
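{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick aside (an illustrative check, not part of the original text), the smallest valid model really does compile; it simply contains no geoms:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Compile the completely empty model described above (illustrative sketch).\n",
"empty_model = mujoco.MjModel.from_xml_string('<mujoco/>')\n",
"print('number of geoms in the empty model:', empty_model.ngeom)"
]
},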
{
"cell_type": "markdown",
"metadata": {
"id": "gf9h_wi9weet"
},
"source": [
"## mjModel\n",
"\n",
"MuJoCo's `mjModel`, contains the *model description*, i.e., all quantities which *do not change over time*. The complete description of `mjModel` can be found at the end of the header file [`mjmodel.h`](https://github.com/google-deepmind/mujoco/blob/main/include/mujoco/mjmodel.h). Note that the header files contain short, useful inline comments, describing each field.\n",
"\n",
"Examples of quantities that can be found in `mjModel` are `ngeom`, the number of geoms in the scene and `geom_rgba`, their respective colors:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "F40Pe6DY3Q0g"
},
"outputs": [],
"source": [
"model.ngeom"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "MOIJG9pzx8cA"
},
"outputs": [],
"source": [
"model.geom_rgba"
]
},
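{
"cell_type": "markdown",
"metadata": {},
"source": [
"A few more examples of such constant quantities (an illustrative sketch; the field names below are taken from `mjmodel.h`, not from the original text):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# More mjModel fields, for illustration (names from mjmodel.h).\n",
"print('nbody:    ', model.nbody)      # number of bodies (here just the world)\n",
"print('nv:       ', model.nv)         # number of degrees of freedom\n",
"print('geom_size:', model.geom_size)  # per-geom size parameters"
]
},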
{
"cell_type": "markdown",
"metadata": {
"id": "bzcLjdY23Kvp"
},
"source": [
"## Named access\n",
"\n",
"The MuJoCo Python bindings provide convenient [accessors](https://mujoco.readthedocs.io/en/latest/python.html#named-access) using names. Calling the `model.geom()` accessor without a name string generates a convenient error that tells us what the valid names are."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "9AuTwbLFyJxQ"
},
"outputs": [],
"source": [
"try:\n",
" model.geom()\n",
"except KeyError as e:\n",
" print(e)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qkfLK3h2zrqr"
},
"source": [
"Calling the named accessor without specifying a property will tell us what all the valid properties are:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "9X95TlWnyEEw"
},
"outputs": [],
"source": [
"model.geom('green_sphere')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mS9qDLevKsJq"
},
"source": [
"Let's read the `green_sphere`'s rgba values:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "xsBlJAV7zpHb"
},
"outputs": [],
"source": [
"model.geom('green_sphere').rgba"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "8a8hswjjKyIa"
},
"source": [
"This functionality is a convenience shortcut for MuJoCo's [`mj_name2id`](https://mujoco.readthedocs.io/en/latest/APIreference.html?highlight=mj_name2id#mj-name2id) function:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Ng92hNUoKnVq"
},
"outputs": [],
"source": [
"id = mujoco.mj_name2id(model, mujoco.mjtObj.mjOBJ_GEOM, 'green_sphere')\n",
"model.geom_rgba[id, :]"
]
},
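{
"cell_type": "markdown",
"metadata": {},
"source": [
"The reverse mapping, from id back to name, is provided by `mj_id2name`. A minimal sketch using the `id` computed above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Convert the geom id back to its name.\n",
"print(mujoco.mj_id2name(model, mujoco.mjtObj.mjOBJ_GEOM, id))"
]
},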
{
"cell_type": "markdown",
"metadata": {
"id": "5WL_SaJPLl3r"
},
"source": [
"Similarly, the read-only `id` and `name` properties can be used to convert from id to name and back:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2CbGSmRZeE5p"
},
"outputs": [],
"source": [
"print('id of \"green_sphere\": ', model.geom('green_sphere').id)\n",
"print('name of geom 1: ', model.geom(1).name)\n",
"print('name of body 0: ', model.body(0).name)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3RIizubaL_du"
},
"source": [
"Note that the 0th body is always the `world`. It cannot be renamed.\n",
"\n",
"The `id` and `name` attributes are useful in Python comprehensions:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "m3MtIE5F1K7s"
},
"outputs": [],
"source": [
"[model.geom(i).name for i in range(model.ngeom)]"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "t5hY0fyXFLcf"
},
"source": [
"## `mjData`\n",
"`mjData` contains the *state* and quantities that depend on it. The state is made up of time, [generalized](https://en.wikipedia.org/wiki/Generalized_coordinates) positions and generalized velocities. These are respectively `data.time`, `data.qpos` and `data.qvel`. In order to make a new `mjData`, all we need is our `mjModel`"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "FV2Hy6m948nr"
},
"outputs": [],
"source": [
"data = mujoco.MjData(model)"
]
},
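{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick illustrative check, the state components named above are already present on the new `mjData`. Since this model has no joints, `qpos` and `qvel` are empty arrays:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Inspect the state components of the freshly created MjData.\n",
"print('time:', data.time)\n",
"print('qpos:', data.qpos)  # empty: the model has no joints, so nq == 0\n",
"print('qvel:', data.qvel)  # empty: nv == 0"
]
},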
{
"cell_type": "markdown",
"metadata": {
"id": "-KmNuvlJ46u0"
},
"source": [
"`mjData` also contains *functions of the state*, for example the Cartesian positions of objects in the world frame. The (x, y, z) positions of our two geoms are in `data.geom_xpos`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "CPwDcAQ0-uUE"
},
"outputs": [],
"source": [
"print(data.geom_xpos)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Sjst5xGXX3sr"
},
"source": [
"Wait, why are both of our geoms at the origin? Didn't we offset the green sphere? The answer is that derived quantities in `mjData` need to be explicitly propagated (see [below](#scrollTo=QY1gpms1HXeN)). In our case, the minimal required function is [`mj_kinematics`](https://mujoco.readthedocs.io/en/latest/APIreference.html#mj-kinematics), which computes global Cartesian poses for all objects (excluding cameras and lights)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "tfe0YeZRYNTr"
},
"outputs": [],
"source": [
"mujoco.mj_kinematics(model, data)\n",
"print('raw access:\\n', data.geom_xpos)\n",
"\n",
"# MjData also supports named access:\n",
"print('\\nnamed access:\\n', data.geom('green_sphere').xpos)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "eU7uWNsTwmcZ"
},
"source": [
"# Basic rendering, simulation, and animation\n",
"\n",
"In order to render we'll need to instantiate a `Renderer` object and call its `render` method.\n",
"\n",
"We'll also reload our model to make the colab's sections independent."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "xK3c0-UDxMrN"
},
"outputs": [],
"source": [
"xml = \"\"\"\n",
"\n",
" \n",
" \n",
" \n",
" \n",
"\n",
"\"\"\"\n",
"# Make model and data\n",
"model = mujoco.MjModel.from_xml_string(xml)\n",
"data = mujoco.MjData(model)\n",
"\n",
"# Make renderer, render and show the pixels\n",
"with mujoco.Renderer(model) as renderer:\n",
" media.show_image(renderer.render())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ZkFSHeYGxlT5"
},
"source": [
"Hmmm, why the black pixels?\n",
"\n",
"**Answer:** For the same reason as above, we first need to propagate the values in `mjData`. This time we'll call [`mj_forward`](https://mujoco.readthedocs.io/en/latest/APIreference/APIfunctions.html#mj-forward), which invokes the entire pipeline up to the computation of accelerations i.e., it computes $\\dot x = f(x)$, where $x$ is the state. This function does more than we actually need, but unless we care about saving computation time, it's good practice to call `mj_forward` since then we know we are not missing anything.\n",
"\n",
"We also need to update the `mjvScene` which is an object held by the renderer describing the visual scene. We'll later see that the scene can include visual objects which are not part of the physical model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "pvh47r97huS4"
},
"outputs": [],
"source": [
"with mujoco.Renderer(model) as renderer:\n",
" mujoco.mj_forward(model, data)\n",
" renderer.update_scene(data)\n",
"\n",
" media.show_image(renderer.render())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6oDW1dOUifw6"
},
"source": [
"This worked, but this image is a bit dark. Let's add a light and re-render."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "iqzJj2NIr_2V"
},
"outputs": [],
"source": [
"xml = \"\"\"\n",
"\n",
" \n",
" \n",
" \n",
" \n",
" \n",
"\n",
"\"\"\"\n",
"model = mujoco.MjModel.from_xml_string(xml)\n",
"data = mujoco.MjData(model)\n",
"\n",
"with mujoco.Renderer(model) as renderer:\n",
" mujoco.mj_forward(model, data)\n",
" renderer.update_scene(data)\n",
"\n",
" media.show_image(renderer.render())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HS4K38Eirww9"
},
"source": [
"Much better!\n",
"\n",
"Note that all values in the `mjModel` instance are writable. While it's generally not recommended to do this but rather to change the values in the XML, because it's easy to make an invalid model, some values are safe to write into, for example colors:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "GBNcQVYJrt2h"
},
"outputs": [],
"source": [
"# Run this cell multiple times for different colors\n",
"model.geom('red_box').rgba[:3] = np.random.rand(3)\n",
"with mujoco.Renderer(model) as renderer:\n",
" renderer.update_scene(data)\n",
"\n",
" media.show_image(renderer.render())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-P95E-QHizQq"
},
"source": [
"# Simulation\n",
"\n",
"Now let's simulate and make a video. We'll use MuJoCo's main high level function `mj_step`, which steps the state $x_{t+h} = f(x_t)$.\n",
"\n",
"Note that in the code block below we are *not* rendering after each call to `mj_step`. This is because the default timestep is 2ms, and we want a 60fps video, not 500fps."
]
},
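{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a back-of-the-envelope check (simple arithmetic, not from the original text): with the default 2ms timestep and a 60fps video, each rendered frame corresponds to roughly 8 physics steps:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Rough number of physics steps per rendered video frame (illustrative).\n",
"steps_per_frame = 1.0 / (model.opt.timestep * 60)  # 60 fps video\n",
"print(f'about {steps_per_frame:.1f} physics steps per video frame')"
]
},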
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "NdVnHOYisiKl"
},
"outputs": [],
"source": [
"duration = 3.8 # (seconds)\n",
"framerate = 60 # (Hz)\n",
"\n",
"# Simulate and display video.\n",
"frames = []\n",
"mujoco.mj_resetData(model, data) # Reset state and time.\n",
"with mujoco.Renderer(model) as renderer:\n",
" while data.time < duration:\n",
" mujoco.mj_step(model, data)\n",
" if len(frames) < data.time * framerate:\n",
" renderer.update_scene(data)\n",
" pixels = renderer.render()\n",
" frames.append(pixels)\n",
"\n",
"media.show_video(frames, fps=framerate)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tYN4sL9RnsCU"
},
"source": [
"Hmmm, the video is playing, but nothing is moving, why is that?\n",
"\n",
"This is because this model has no [degrees of freedom](https://www.google.com/url?sa=D&q=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FDegrees_of_freedom_(mechanics)) (DoFs). The things that move (and which have inertia) are called *bodies*. We add DoFs by adding *joints* to bodies, specifying how they can move with respect to their parents. Let's make a new body that contains our geoms, add a hinge joint and re-render, while visualizing the joint axis using the visualization option object `MjvOption`."
]
},
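{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before adding the new body, we can confirm the claim above (a quick illustrative check): `model.nv`, the number of degrees of freedom, is zero for the current model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The current model has no joints, hence no degrees of freedom.\n",
"print('degrees of freedom (model.nv):', model.nv)"
]
},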
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "LbWf84VYst5m"
},
"outputs": [],
"source": [
"xml = \"\"\"\n",
"\n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
"\n",
"\"\"\"\n",
"model = mujoco.MjModel.from_xml_string(xml)\n",
"data = mujoco.MjData(model)\n",
"\n",
"# enable joint visualization option:\n",
"scene_option = mujoco.MjvOption()\n",
"scene_option.flags[mujoco.mjtVisFlag.mjVIS_JOINT] = True\n",
"\n",
"duration = 3.8 # (seconds)\n",
"framerate = 60 # (Hz)\n",
"\n",
"# Simulate and display video.\n",
"frames = []\n",
"mujoco.mj_resetData(model, data)\n",
"with mujoco.Renderer(model) as renderer:\n",
" while data.time < duration:\n",
" mujoco.mj_step(model, data)\n",
" if len(frames) < data.time * framerate:\n",
" renderer.update_scene(data, scene_option=scene_option)\n",
" pixels = renderer.render()\n",
" frames.append(pixels)\n",
"\n",
"media.show_video(frames, fps=framerate)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Ymv-tvWCpl6V"
},
"source": [
"Note that we rotated the `box_and_sphere` body by 30° around the Z (vertical) axis, with the directive `euler=\"0 0 -30\"`. This was made to emphasize that the poses of elements in the [kinematic tree](https://www.google.com/url?sa=D&q=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FKinematic_chain) are always with respect to their *parent body*, so our two geoms were also rotated by this transformation.\n",
"\n",
"Physics options live in `mjModel.opt`, for example the timestep:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "5yvAJokcpyX_"
},
"outputs": [],
"source": [
"model.opt.timestep"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SdkwLeGUp9B2"
},
"source": [
"Let's flip gravity and re-render:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ocjPQG8Dp2F-"
},
"outputs": [],
"source": [
"print('default gravity', model.opt.gravity)\n",
"model.opt.gravity = (0, 0, 10)\n",
"print('flipped gravity', model.opt.gravity)\n",
"\n",
"# Simulate and display video.\n",
"frames = []\n",
"mujoco.mj_resetData(model, data)\n",
"with mujoco.Renderer(model) as renderer:\n",
" while data.time < duration:\n",
" mujoco.mj_step(model, data)\n",
" if len(frames) < data.time * framerate:\n",
" renderer.update_scene(data, scene_option=scene_option)\n",
" pixels = renderer.render()\n",
" frames.append(pixels)\n",
"\n",
"media.show_video(frames, fps=60)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FsxDDgXBqg_J"
},
"source": [
"We could also have done this in XML using the top-level `
` clause to set the integrator to the 4th order [Runge Kutta](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods). Runge-Kutta has a higher rate of convergence than the default Euler integrator, which in many cases increases the accuracy at a given timestep size.\n",
"3. We define the floor's grid material inside the `` clause and reference it in the `\"floor\"` geom.\n",
"4. We use an invisible and non-colliding box geom called `ballast` to move the top's center-of-mass lower. Having a low center of mass is (counter-intuitively) required for the flipping behavior to occur.\n",
"5. We save our initial spinning state as a *keyframe*. It has a high rotational velocity around the Z-axis, but is not perfectly oriented with the world, which introduces the symmetry-breaking required for the flipping.\n",
"6. We define a `` in our model, and then render from it using the `camera` argument to `update_scene()`.\n",
"Let us examine the state:\n"
]
},
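{
"cell_type": "markdown",
"metadata": {},
"source": [
"The model definition below is a reconstruction sketch based on the feature list above: a free body (a body with a `<freejoint/>`), an RK4 integrator, a grid-textured `floor` geom, a non-colliding `ballast` geom, a `spinning` keyframe and a `closeup` camera. The exact sizes, positions and camera placement are illustrative assumptions, not original values."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Reconstruction sketch of the spinning-top model (see the note above);\n",
"# the specific numbers are assumptions chosen to reproduce the described setup.\n",
"tippe_top = \"\"\"\n",
"<mujoco model=\"tippe top\">\n",
"  <option integrator=\"RK4\"/>\n",
"\n",
"  <asset>\n",
"    <texture name=\"grid\" type=\"2d\" builtin=\"checker\" rgb1=\".1 .2 .3\"\n",
"     rgb2=\".2 .3 .4\" width=\"300\" height=\"300\"/>\n",
"    <material name=\"grid\" texture=\"grid\" texrepeat=\"8 8\" reflectance=\".2\"/>\n",
"  </asset>\n",
"\n",
"  <worldbody>\n",
"    <geom name=\"floor\" size=\".2 .2 .01\" type=\"plane\" material=\"grid\"/>\n",
"    <light pos=\"0 0 .6\"/>\n",
"    <camera name=\"closeup\" pos=\"0 -.1 .07\" xyaxes=\"1 0 0 0 1 2\"/>\n",
"    <body name=\"top\" pos=\"0 0 .02\">\n",
"      <freejoint/>\n",
"      <geom name=\"ball\" type=\"sphere\" size=\".02\"/>\n",
"      <geom name=\"stem\" type=\"cylinder\" pos=\"0 0 .02\" size=\"0.004 .008\"/>\n",
"      <geom name=\"ballast\" type=\"box\" size=\".023 .023 0.005\" pos=\"0 0 -.015\"\n",
"       contype=\"0\" conaffinity=\"0\" group=\"3\"/>\n",
"    </body>\n",
"  </worldbody>\n",
"\n",
"  <keyframe>\n",
"    <key name=\"spinning\" qpos=\"0 0 0.02 1 0 0 0\" qvel=\"0 0 0 0 1 200\"/>\n",
"  </keyframe>\n",
"</mujoco>\n",
"\"\"\"\n",
"model = mujoco.MjModel.from_xml_string(tippe_top)\n",
"data = mujoco.MjData(model)\n",
"mujoco.mj_forward(model, data)\n",
"\n",
"# Render the initial configuration from the model's own camera.\n",
"with mujoco.Renderer(model) as renderer:\n",
"  renderer.update_scene(data, camera=\"closeup\")\n",
"  media.show_image(renderer.render())"
]
},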
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "o4S9nYhHOKmb"
},
"outputs": [],
"source": [
"print('positions', data.qpos)\n",
"print('velocities', data.qvel)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "71UgzBAqWdtZ"
},
"source": [
"The velocities are easy to interpret, 6 zeros, one for each DoF. What about the length 7 positions? We can see the initial 2cm height of the body; the subsequent four numbers are the 3D orientation, defined by a *unit quaternion*. 3D orientations are represented with **4** numbers while angular velocities are **3** numbers. For more information see the Wikipedia article on [quaternions and spatial rotation](https://en.wikipedia.org/wiki/Quaternions_and_spatial_rotation).\n",
"\n",
"Let's make a video:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "5P4HkhKNGQvs"
},
"outputs": [],
"source": [
"duration = 7 # (seconds)\n",
"framerate = 60 # (Hz)\n",
"\n",
"# Simulate and display video.\n",
"frames = []\n",
"mujoco.mj_resetDataKeyframe(model, data, 0) # Reset the state to keyframe 0\n",
"with mujoco.Renderer(model) as renderer:\n",
" while data.time < duration:\n",
" mujoco.mj_step(model, data)\n",
" if len(frames) < data.time * framerate:\n",
" renderer.update_scene(data, \"closeup\")\n",
" pixels = renderer.render()\n",
" frames.append(pixels)\n",
"\n",
"media.show_video(frames, fps=framerate)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "rRuFKD2ubPgu"
},
"source": [
"### Measuring values from `mjData`\n",
"As mentioned above, the `mjData` structure contains the dynamic variables and intermediate results produced by the simulation which are *expected to change* on each timestep. Below we simulate for 2000 timesteps and plot the angular velocity of the top and height of the stem as a function of time."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1XXB6asJoZ2N"
},
"outputs": [],
"source": [
"timevals = []\n",
"angular_velocity = []\n",
"stem_height = []\n",
"\n",
"# Simulate and save data\n",
"mujoco.mj_resetDataKeyframe(model, data, 0)\n",
"while data.time < duration:\n",
" mujoco.mj_step(model, data)\n",
" timevals.append(data.time)\n",
" angular_velocity.append(data.qvel[3:6].copy())\n",
" stem_height.append(data.geom_xpos[2,2]);\n",
"\n",
"dpi = 120\n",
"width = 600\n",
"height = 800\n",
"figsize = (width / dpi, height / dpi)\n",
"_, ax = plt.subplots(2, 1, figsize=figsize, dpi=dpi, sharex=True)\n",
"\n",
"ax[0].plot(timevals, angular_velocity)\n",
"ax[0].set_title('angular velocity')\n",
"ax[0].set_ylabel('radians / second')\n",
"\n",
"ax[1].plot(timevals, stem_height)\n",
"ax[1].set_xlabel('time (seconds)')\n",
"ax[1].set_ylabel('meters')\n",
"_ = ax[1].set_title('stem height')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "u_zN8vATwcGy"
},
"source": [
"# Example: A chaotic pendulum"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "g1MKUEL_eSCM"
},
"source": [
"Below is a model of a chaotic pendulum, similar to [this one](https://www.exploratorium.edu/exhibits/chaotic-pendulum) in the San Francisco Exploratorium."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "3jHYTV-bwfrS"
},
"outputs": [],
"source": [
"chaotic_pendulum = \"\"\"\n",
"\n",
"
\n",
" \n",
"
\n",
"\n",
" \n",
" \n",
" \n",
" \n",
"\n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
"\n",
"\"\"\"\n",
"model = mujoco.MjModel.from_xml_string(chaotic_pendulum)\n",
"data = mujoco.MjData(model)\n",
"height = 480\n",
"width = 640\n",
"\n",
"with mujoco.Renderer(model, height, width) as renderer:\n",
" mujoco.mj_forward(model, data)\n",
" renderer.update_scene(data, camera=\"fixed\")\n",
"\n",
" media.show_image(renderer.render())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "EKZrTBSS5f49"
},
"source": [
"## Timing\n",
"Let's see a video of it in action while we time the components:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-kNWvE9dNwYW"
},
"outputs": [],
"source": [
"# setup\n",
"n_seconds = 6\n",
"framerate = 30 # Hz\n",
"n_frames = int(n_seconds * framerate)\n",
"frames = []\n",
"height = 240\n",
"width = 320\n",
"\n",
"# set initial state\n",
"mujoco.mj_resetData(model, data)\n",
"data.joint('root').qvel = 10\n",
"\n",
"# simulate and record frames\n",
"frame = 0\n",
"sim_time = 0\n",
"render_time = 0\n",
"n_steps = 0\n",
"with mujoco.Renderer(model, height, width) as renderer:\n",
" for i in range(n_frames):\n",
" while data.time * framerate < i:\n",
" tic = time.time()\n",
" mujoco.mj_step(model, data)\n",
" sim_time += time.time() - tic\n",
" n_steps += 1\n",
" tic = time.time()\n",
" renderer.update_scene(data, \"fixed\")\n",
" frame = renderer.render()\n",
" render_time += time.time() - tic\n",
" frames.append(frame)\n",
"\n",
"# print timing and play video\n",
"step_time = 1e6*sim_time/n_steps\n",
"step_fps = n_steps/sim_time\n",
"print(f'simulation: {step_time:5.3g} μs/step ({step_fps:5.0f}Hz)')\n",
"frame_time = 1e6*render_time/n_frames\n",
"frame_fps = n_frames/render_time\n",
"print(f'rendering: {frame_time:5.3g} μs/frame ({frame_fps:5.0f}Hz)')\n",
"print('\\n')\n",
"\n",
"# show video\n",
"media.show_video(frames, fps=framerate)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Iqi_m8HT-X5k"
},
"source": [
"Note that rendering is **much** slower than the simulated physics.\n",
"\n",
"## Chaos\n",
"This is a [chaotic](https://en.wikipedia.org/wiki/Chaos_theory) system (small pertubations in initial conditions accumulate quickly):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Pa_19EfvOzzg"
},
"outputs": [],
"source": [
"PERTURBATION = 1e-7\n",
"SIM_DURATION = 10 # seconds\n",
"NUM_REPEATS = 8\n",
"\n",
"# preallocate\n",
"n_steps = int(SIM_DURATION / model.opt.timestep)\n",
"sim_time = np.zeros(n_steps)\n",
"angle = np.zeros(n_steps)\n",
"energy = np.zeros(n_steps)\n",
"\n",
"# prepare plotting axes\n",
"_, ax = plt.subplots(2, 1, figsize=(8, 6), sharex=True)\n",
"\n",
"# simulate NUM_REPEATS times with slightly different initial conditions\n",
"for _ in range(NUM_REPEATS):\n",
" # initialize\n",
" mujoco.mj_resetData(model, data)\n",
" data.qvel[0] = 10 # root joint velocity\n",
" # perturb initial velocities\n",
" data.qvel[:] += PERTURBATION * np.random.randn(model.nv)\n",
"\n",
" # simulate\n",
" for i in range(n_steps):\n",
" mujoco.mj_step(model, data)\n",
" sim_time[i] = data.time\n",
" angle[i] = data.joint('root').qpos\n",
" energy[i] = data.energy[0] + data.energy[1]\n",
"\n",
" # plot\n",
" ax[0].plot(sim_time, angle)\n",
" ax[1].plot(sim_time, energy)\n",
"\n",
"# finalize plot\n",
"ax[0].set_title('root angle')\n",
"ax[0].set_ylabel('radian')\n",
"ax[1].set_title('total energy')\n",
"ax[1].set_ylabel('Joule')\n",
"ax[1].set_xlabel('second')\n",
"plt.tight_layout()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "daSIA_ewFGxV"
},
"source": [
"## Timestep and accuracy\n",
"**Question:** Why is the energy varying at all? There is no friction or damping, this system should conserve energy.\n",
"\n",
"**Answer:** Because of the discretization of time.\n",
"\n",
"If we decrease the timestep we'll get better accuracy and better energy conservation:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "4z-7KN_fFme-"
},
"outputs": [],
"source": [
"SIM_DURATION = 10 # (seconds)\n",
"TIMESTEPS = np.power(10, np.linspace(-2, -4, 5))\n",
"\n",
"# prepare plotting axes\n",
"_, ax = plt.subplots(1, 1)\n",
"\n",
"for dt in TIMESTEPS:\n",
" # set timestep, print\n",
" model.opt.timestep = dt\n",
"\n",
" # allocate\n",
" n_steps = int(SIM_DURATION / model.opt.timestep)\n",
" sim_time = np.zeros(n_steps)\n",
" energy = np.zeros(n_steps)\n",
"\n",
" # initialize\n",
" mujoco.mj_resetData(model, data)\n",
" data.qvel[0] = 9 # root joint velocity\n",
"\n",
" # simulate\n",
" print('{} steps at dt = {:2.2g}ms'.format(n_steps, 1000*dt))\n",
" for i in range(n_steps):\n",
" mujoco.mj_step(model, data)\n",
" sim_time[i] = data.time\n",
" energy[i] = data.energy[0] + data.energy[1]\n",
"\n",
" # plot\n",
" ax.plot(sim_time, energy, label='timestep = {:2.2g}ms'.format(1000*dt))\n",
"\n",
"# finalize plot\n",
"ax.set_title('energy')\n",
"ax.set_ylabel('Joule')\n",
"ax.set_xlabel('second')\n",
"ax.legend(frameon=True);\n",
"plt.tight_layout()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "jsVkUm7QKb9I"
},
"source": [
"## Timestep and divergence\n",
"When we increase the time step, the simulation quickly diverges:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "FbdUA4zDPbDP"
},
"outputs": [],
"source": [
"SIM_DURATION = 10 # (seconds)\n",
"TIMESTEPS = np.power(10, np.linspace(-2, -1.5, 7))\n",
"\n",
"# get plotting axes\n",
"ax = plt.gca()\n",
"\n",
"for dt in TIMESTEPS:\n",
" # set timestep\n",
" model.opt.timestep = dt\n",
"\n",
" # allocate\n",
" n_steps = int(SIM_DURATION / model.opt.timestep)\n",
" sim_time = np.zeros(n_steps)\n",
" energy = np.zeros(n_steps) * np.nan\n",
" speed = np.zeros(n_steps) * np.nan\n",
"\n",
" # initialize\n",
" mujoco.mj_resetData(model, data)\n",
" data.qvel[0] = 11 # set root joint velocity\n",
"\n",
" # simulate\n",
" print('simulating {} steps at dt = {:2.2g}ms'.format(n_steps, 1000*dt))\n",
" for i in range(n_steps):\n",
" mujoco.mj_step(model, data)\n",
" if data.warning.number.any():\n",
" warning_index = np.nonzero(data.warning.number)[0][0]\n",
" warning = mujoco.mjtWarning(warning_index).name\n",
" print(f'stopped due to divergence ({warning}) at timestep {i}.\\n')\n",
" break\n",
" sim_time[i] = data.time\n",
" energy[i] = sum(abs(data.qvel))\n",
" speed[i] = np.linalg.norm(data.qvel)\n",
"\n",
" # plot\n",
" ax.plot(sim_time, energy, label='timestep = {:2.2g}ms'.format(1000*dt))\n",
" ax.set_yscale('log')\n",
"\n",
"# finalize plot\n",
"ax.set_ybound(1, 1e3)\n",
"ax.set_title('energy')\n",
"ax.set_ylabel('Joule')\n",
"ax.set_xlabel('second')\n",
"ax.legend(frameon=True, loc='lower right');\n",
"plt.tight_layout()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FITYfGyy3XPL"
},
"source": [
"# Contacts\n",
"\n",
"Let's go back to our box and sphere example and give it a free joint:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2n1VNVv_FkbB"
},
"outputs": [],
"source": [
"free_body_MJCF = \"\"\"\n",
"\n",
" \n",
" \n",
" \n",
" \n",
"\n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
"\n",
"\"\"\"\n",
"model = mujoco.MjModel.from_xml_string(free_body_MJCF)\n",
"data = mujoco.MjData(model)\n",
"height = 400\n",
"width = 600\n",
"\n",
"with mujoco.Renderer(model, height, width) as renderer:\n",
" mujoco.mj_forward(model, data)\n",
" renderer.update_scene(data, \"fixed\")\n",
"\n",
" media.show_image(renderer.render())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Z2amdQCn8REu"
},
"source": [
"Let render this body rolling on the floor, in slow-motion, while visualizing contact points and forces:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "HlRhFs_d3WLP"
},
"outputs": [],
"source": [
"n_frames = 200\n",
"height = 240\n",
"width = 320\n",
"frames = []\n",
"\n",
"# visualize contact frames and forces, make body transparent\n",
"options = mujoco.MjvOption()\n",
"mujoco.mjv_defaultOption(options)\n",
"options.flags[mujoco.mjtVisFlag.mjVIS_CONTACTPOINT] = True\n",
"options.flags[mujoco.mjtVisFlag.mjVIS_CONTACTFORCE] = True\n",
"options.flags[mujoco.mjtVisFlag.mjVIS_TRANSPARENT] = True\n",
"\n",
"# tweak scales of contact visualization elements\n",
"model.vis.scale.contactwidth = 0.1\n",
"model.vis.scale.contactheight = 0.03\n",
"model.vis.scale.forcewidth = 0.05\n",
"model.vis.map.force = 0.3\n",
"\n",
"# random initial rotational velocity:\n",
"mujoco.mj_resetData(model, data)\n",
"data.qvel[3:6] = 5*np.random.randn(3)\n",
"\n",
"# Simulate and display video.\n",
"with mujoco.Renderer(model, height, width) as renderer:\n",
" for i in range(n_frames):\n",
" while data.time < i/120.0: #1/4x real time\n",
" mujoco.mj_step(model, data)\n",
" renderer.update_scene(data, \"track\", options)\n",
" frame = renderer.render()\n",
" frames.append(frame)\n",
"\n",
"media.show_video(frames, fps=30)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "_181TbtVSMBl"
},
"source": [
"## Analysis of contact forces\n",
"\n",
"Let's rerun the above simulation (with a different random initial condition) and\n",
"plot some values related to the contacts"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "BMqyWeHki8Eg"
},
"outputs": [],
"source": [
"n_steps = 499\n",
"\n",
"# allocate\n",
"sim_time = np.zeros(n_steps)\n",
"ncon = np.zeros(n_steps)\n",
"force = np.zeros((n_steps,3))\n",
"velocity = np.zeros((n_steps, model.nv))\n",
"penetration = np.zeros(n_steps)\n",
"acceleration = np.zeros((n_steps, model.nv))\n",
"forcetorque = np.zeros(6)\n",
"\n",
"# random initial rotational velocity:\n",
"mujoco.mj_resetData(model, data)\n",
"data.qvel[3:6] = 2*np.random.randn(3)\n",
"\n",
"# simulate and save data\n",
"for i in range(n_steps):\n",
" mujoco.mj_step(model, data)\n",
" sim_time[i] = data.time\n",
" ncon[i] = data.ncon\n",
" velocity[i] = data.qvel[:]\n",
" acceleration[i] = data.qacc[:]\n",
" # iterate over active contacts, save force and distance\n",
" for j,c in enumerate(data.contact):\n",
" mujoco.mj_contactForce(model, data, j, forcetorque)\n",
" force[i] += forcetorque[0:3]\n",
" penetration[i] = min(penetration[i], c.dist)\n",
" # we could also do\n",
" # force[i] += data.qfrc_constraint[0:3]\n",
" # do you see why?\n",
"\n",
"# plot\n",
"_, ax = plt.subplots(3, 2, sharex=True, figsize=(10, 10))\n",
"\n",
"lines = ax[0,0].plot(sim_time, force)\n",
"ax[0,0].set_title('contact force')\n",
"ax[0,0].set_ylabel('Newton')\n",
"ax[0,0].legend(lines, ('normal z', 'friction x', 'friction y'));\n",
"\n",
"ax[1,0].plot(sim_time, acceleration)\n",
"ax[1,0].set_title('acceleration')\n",
"ax[1,0].set_ylabel('(meter,radian)/s/s')\n",
"ax[1,0].legend(['ax', 'ay', 'az', 'αx', 'αy', 'αz'])\n",
"\n",
"ax[2,0].plot(sim_time, velocity)\n",
"ax[2,0].set_title('velocity')\n",
"ax[2,0].set_ylabel('(meter,radian)/s')\n",
"ax[2,0].set_xlabel('second')\n",
"ax[2,0].legend(['vx', 'vy', 'vz', 'ωx', 'ωy', 'ωz'])\n",
"\n",
"ax[0,1].plot(sim_time, ncon)\n",
"ax[0,1].set_title('number of contacts')\n",
"ax[0,1].set_yticks(range(6))\n",
"\n",
"ax[1,1].plot(sim_time, force[:,0])\n",
"ax[1,1].set_yscale('log')\n",
"ax[1,1].set_title('normal (z) force - log scale')\n",
"ax[1,1].set_ylabel('Newton')\n",
"z_gravity = -model.opt.gravity[2]\n",
"mg = model.body(\"box_and_sphere\").mass[0] * z_gravity\n",
"mg_line = ax[1,1].plot(sim_time, np.ones(n_steps)*mg, label='m*g', linewidth=1)\n",
"ax[1,1].legend()\n",
"\n",
"ax[2,1].plot(sim_time, 1000*penetration)\n",
"ax[2,1].set_title('penetration depth')\n",
"ax[2,1].set_ylabel('millimeter')\n",
"ax[2,1].set_xlabel('second')\n",
"\n",
"plt.tight_layout()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "zV5PkYzFXu42"
},
"source": [
"## Friction\n",
"\n",
"Let's see the effect of changing friction values"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2R_gKoYyXwda"
},
"outputs": [],
"source": [
"MJCF = \"\"\"\n",
"\n",
" \n",
" \n",
" \n",
" \n",
" \n",
"\n",
" \n",
" \n",
" \n",
" \n",
"\n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
"\n",
"\n",
"\"\"\"\n",
"n_frames = 60\n",
"height = 300\n",
"width = 300\n",
"frames = []\n",
"\n",
"# load\n",
"model = mujoco.MjModel.from_xml_string(MJCF)\n",
"data = mujoco.MjData(model)\n",
"\n",
"# Simulate and display video.\n",
"with mujoco.Renderer(model, height, width) as renderer:\n",
" mujoco.mj_resetData(model, data)\n",
" for i in range(n_frames):\n",
" while data.time < i/30.0:\n",
" mujoco.mj_step(model, data)\n",
" renderer.update_scene(data, \"y\")\n",
" frame = renderer.render()\n",
" frames.append(frame)\n",
"\n",
"media.show_video(frames, fps=30)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ArmmaPqGP6W7"
},
"source": [
"# Tendons, actuators and sensors"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "VJz84c97c8Df"
},
"outputs": [],
"source": [
"MJCF = \"\"\"\n",
"\n",
" \n",
" \n",
" \n",
" \n",
"\n",
" \n",
" \n",
" \n",
" \n",
" \n",
"\n",
" \n",
" \n",
" \n",
" \n",
" \n",
"\n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
"\n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
"\n",
" \n",
" \n",
" \n",
"\n",
" \n",
" \n",
" \n",
"\n",
"\"\"\"\n",
"model = mujoco.MjModel.from_xml_string(MJCF)\n",
"data = mujoco.MjData(model)\n",
"height = 480\n",
"width = 480\n",
"\n",
"with mujoco.Renderer(model, height, width) as renderer:\n",
" mujoco.mj_forward(model, data)\n",
" renderer.update_scene(data, \"fixed\")\n",
"\n",
" media.show_image(renderer.render())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "u8z2vrOr_RVD"
},
"source": [
"actuated bat and passive \"piñata\":"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "z-zoBCuBv2Xi"
},
"outputs": [],
"source": [
"n_frames = 180\n",
"height = 240\n",
"width = 320\n",
"frames = []\n",
"fps = 60.0\n",
"times = []\n",
"sensordata = []\n",
"\n",
"# constant actuator signal\n",
"mujoco.mj_resetData(model, data)\n",
"data.ctrl = 20\n",
"\n",
"# Simulate and display video.\n",
"with mujoco.Renderer(model, height, width) as renderer:\n",
" for i in range(n_frames):\n",
" while data.time < i/fps:\n",
" mujoco.mj_step(model, data)\n",
" times.append(data.time)\n",
" sensordata.append(data.sensor('accelerometer').data.copy())\n",
" renderer.update_scene(data, \"fixed\")\n",
" frame = renderer.render()\n",
" frames.append(frame)\n",
"\n",
"media.show_video(frames, fps=fps)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gwHMy_iRA7Jh"
},
"source": [
"Let's plot the values measured by our accelerometer sensor:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "uy4wSEMAAJgn"
},
"outputs": [],
"source": [
"ax = plt.gca()\n",
"\n",
"ax.plot(np.asarray(times), np.asarray(sensordata), label=[f\"axis {v}\" for v in ['x', 'y', 'z']])\n",
"\n",
"# finalize plot\n",
"ax.set_title('Accelerometer values')\n",
"ax.set_ylabel('meter/second^2')\n",
"ax.set_xlabel('second')\n",
"ax.legend(frameon=True, loc='lower right')\n",
"plt.tight_layout()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0YKSTtJ_BQ7x"
},
"source": [
"Note how the moments when the body is hit by the bat are clearly visible in the accelerometer measurements."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "1kOs1wTc7uCZ"
},
"source": [
"# Advanced rendering\n",
"\n",
"Like joint visualization, additional rendering options are exposed as parameters to the `render` method.\n",
"\n",
"Let's bring back our first model:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "mTDgsk2xcgwH"
},
"outputs": [],
"source": [
"xml = \"\"\"\n",
"\n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
"\n",
"\"\"\"\n",
"model = mujoco.MjModel.from_xml_string(xml)\n",
"data = mujoco.MjData(model)\n",
"\n",
"with mujoco.Renderer(model) as renderer:\n",
" mujoco.mj_forward(model, data)\n",
" renderer.update_scene(data)\n",
" media.show_image(renderer.render())"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "VePXamL_6XUc"
},
"outputs": [],
"source": [
"#@title Enable transparency and frame visualization {vertical-output: true}\n",
"\n",
"scene_option.frame = mujoco.mjtFrame.mjFRAME_GEOM\n",
"scene_option.flags[mujoco.mjtVisFlag.mjVIS_TRANSPARENT] = True\n",
"with mujoco.Renderer(model) as renderer:\n",
" renderer.update_scene(data, scene_option=scene_option)\n",
" frame = renderer.render()\n",
" media.show_image(frame)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PVcpcvww9lZ8"
},
"outputs": [],
"source": [
"#@title Depth rendering {vertical-output: true}\n",
"\n",
"with mujoco.Renderer(model) as renderer:\n",
" # update renderer to render depth\n",
" renderer.enable_depth_rendering()\n",
"\n",
" # reset the scene\n",
" renderer.update_scene(data)\n",
"\n",
" # depth is a float array, in meters.\n",
" depth = renderer.render()\n",
"\n",
" # Shift nearest values to the origin.\n",
" depth -= depth.min()\n",
" # Scale by 2 mean distances of near rays.\n",
" depth /= 2*depth[depth <= 1].mean()\n",
" # Scale to [0, 255]\n",
" pixels = 255*np.clip(depth, 0, 1)\n",
"\n",
" media.show_image(pixels.astype(np.uint8))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "PNwiIrgpx7T8"
},
"outputs": [],
"source": [
"#@title Segmentation rendering {vertical-output: true}\n",
"\n",
"with mujoco.Renderer(model) as renderer:\n",
" renderer.disable_depth_rendering()\n",
"\n",
" # update renderer to render segmentation\n",
" renderer.enable_segmentation_rendering()\n",
"\n",
" # reset the scene\n",
" renderer.update_scene(data)\n",
"\n",
" seg = renderer.render()\n",
"\n",
" # Display the contents of the first channel, which contains object\n",
" # IDs. The second channel, seg[:, :, 1], contains object types.\n",
" geom_ids = seg[:, :, 0]\n",
" # Infinity is mapped to -1\n",
" geom_ids = geom_ids.astype(np.float64) + 1\n",
" # Scale to [0, 1]\n",
" geom_ids = geom_ids / geom_ids.max()\n",
" pixels = 255*geom_ids\n",
" media.show_image(pixels.astype(np.uint8))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "wo72mo0mGIXr"
},
"source": [
"## The camera matrix\n",
"\n",
"For a description of the camera matrix see the article [Camera matrix](https://en.wikipedia.org/wiki/Camera_matrix) on Wikipedia."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "sDYwClpxaxab"
},
"outputs": [],
"source": [
"def compute_camera_matrix(renderer, data):\n",
" \"\"\"Returns the 3x4 camera matrix.\"\"\"\n",
" # If the camera is a 'free' camera, we get its position and orientation\n",
" # from the scene data structure. It is a stereo camera, so we average over\n",
" # the left and right channels. Note: we call `self.update()` in order to\n",
" # ensure that the contents of `scene.camera` are correct.\n",
" renderer.update_scene(data)\n",
" pos = np.mean([camera.pos for camera in renderer.scene.camera], axis=0)\n",
" z = -np.mean([camera.forward for camera in renderer.scene.camera], axis=0)\n",
" y = np.mean([camera.up for camera in renderer.scene.camera], axis=0)\n",
" rot = np.vstack((np.cross(y, z), y, z))\n",
" fov = model.vis.global_.fovy\n",
"\n",
" # Translation matrix (4x4).\n",
" translation = np.eye(4)\n",
" translation[0:3, 3] = -pos\n",
"\n",
" # Rotation matrix (4x4).\n",
" rotation = np.eye(4)\n",
" rotation[0:3, 0:3] = rot\n",
"\n",
" # Focal transformation matrix (3x4).\n",
" focal_scaling = (1./np.tan(np.deg2rad(fov)/2)) * renderer.height / 2.0\n",
" focal = np.diag([-focal_scaling, focal_scaling, 1.0, 0])[0:3, :]\n",
"\n",
" # Image matrix (3x3).\n",
" image = np.eye(3)\n",
" image[0, 2] = (renderer.width - 1) / 2.0\n",
" image[1, 2] = (renderer.height - 1) / 2.0\n",
" return image @ focal @ rotation @ translation"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "My0N4_7PDJ_q"
},
"outputs": [],
"source": [
"#@title Project from world to camera coordinates {vertical-output: true}\n",
"\n",
"with mujoco.Renderer(model) as renderer:\n",
" renderer.disable_segmentation_rendering()\n",
" # reset the scene\n",
" renderer.update_scene(data)\n",
"\n",
" # Get the world coordinates of the box corners\n",
" box_pos = data.geom_xpos[model.geom('red_box').id]\n",
" box_mat = data.geom_xmat[model.geom('red_box').id].reshape(3, 3)\n",
" box_size = model.geom_size[model.geom('red_box').id]\n",
" offsets = np.array([-1, 1]) * box_size[:, None]\n",
" xyz_local = np.stack(list(itertools.product(*offsets))).T\n",
" xyz_global = box_pos[:, None] + box_mat @ xyz_local\n",
"\n",
" # Camera matrices multiply homogenous [x, y, z, 1] vectors.\n",
" corners_homogeneous = np.ones((4, xyz_global.shape[1]), dtype=float)\n",
" corners_homogeneous[:3, :] = xyz_global\n",
"\n",
" # Get the camera matrix.\n",
" m = compute_camera_matrix(renderer, data)\n",
"\n",
" # Project world coordinates into pixel space. See:\n",
" # https://en.wikipedia.org/wiki/3D_projection#Mathematical_formula\n",
" xs, ys, s = m @ corners_homogeneous\n",
" # x and y are in the pixel coordinate system.\n",
" x = xs / s\n",
" y = ys / s\n",
"\n",
" # Render the camera view and overlay the projected corner coordinates.\n",
" pixels = renderer.render()\n",
" fig, ax = plt.subplots(1, 1)\n",
" ax.imshow(pixels)\n",
" ax.plot(x, y, '+', c='w')\n",
" ax.set_axis_off()"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "AGm5-e0sHEAF"
},
"source": [
"## Modifying the scene\n",
"\n",
"Let's add some arbitrary geometry to the `mjvScene`."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Z6NDYJ8IOVt7"
},
"outputs": [],
"source": [
"def get_geom_speed(model, data, geom_name):\n",
" \"\"\"Returns the speed of a geom.\"\"\"\n",
" geom_vel = np.zeros(6)\n",
" geom_type = mujoco.mjtObj.mjOBJ_GEOM\n",
" geom_id = data.geom(geom_name).id\n",
" mujoco.mj_objectVelocity(model, data, geom_type, geom_id, geom_vel, 0)\n",
" return np.linalg.norm(geom_vel)\n",
"\n",
"def add_visual_capsule(scene, point1, point2, radius, rgba):\n",
" \"\"\"Adds one capsule to an mjvScene.\"\"\"\n",
" if scene.ngeom >= scene.maxgeom:\n",
" return\n",
" scene.ngeom += 1 # increment ngeom\n",
" # initialise a new capsule, add it to the scene using mjv_connector\n",
" mujoco.mjv_initGeom(scene.geoms[scene.ngeom-1],\n",
" mujoco.mjtGeom.mjGEOM_CAPSULE, np.zeros(3),\n",
" np.zeros(3), np.zeros(9), rgba.astype(np.float32))\n",
" mujoco.mjv_connector(scene.geoms[scene.ngeom-1],\n",
" mujoco.mjtGeom.mjGEOM_CAPSULE, radius,\n",
" point1, point2)\n",
"\n",
" # traces of time, position and speed\n",
"times = []\n",
"positions = []\n",
"speeds = []\n",
"offset = model.jnt_axis[0]/16 # offset along the joint axis\n",
"\n",
"def modify_scene(scn):\n",
" \"\"\"Draw position trace, speed modifies width and colors.\"\"\"\n",
" if len(positions) > 1:\n",
" for i in range(len(positions)-1):\n",
" rgba=np.array((np.clip(speeds[i]/10, 0, 1),\n",
" np.clip(1-speeds[i]/10, 0, 1),\n",
" .5, 1.))\n",
" radius=.003*(1+speeds[i])\n",
" point1 = positions[i] + offset*times[i]\n",
" point2 = positions[i+1] + offset*times[i+1]\n",
" add_visual_capsule(scn, point1, point2, radius, rgba)\n",
"\n",
"duration = 6 # (seconds)\n",
"framerate = 30 # (Hz)\n",
"\n",
"# Simulate and display video.\n",
"frames = []\n",
"\n",
"# Reset state and time.\n",
"mujoco.mj_resetData(model, data)\n",
"mujoco.mj_forward(model, data)\n",
"\n",
"with mujoco.Renderer(model) as renderer:\n",
" while data.time < duration:\n",
" # append data to the traces\n",
" positions.append(data.geom_xpos[data.geom(\"green_sphere\").id].copy())\n",
" times.append(data.time)\n",
" speeds.append(get_geom_speed(model, data, \"green_sphere\"))\n",
" mujoco.mj_step(model, data)\n",
" if len(frames) < data.time * framerate:\n",
" renderer.update_scene(data)\n",
" modify_scene(renderer.scene)\n",
" pixels = renderer.render()\n",
" frames.append(pixels)\n",
"\n",
"media.show_video(frames, fps=framerate)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "p6wHrjRg1EGF"
},
"source": [
"## Multiple frames in the same scene\n",
"\n",
"Sometimes one would like to draw the same geometry multiple times, for example when a model is tracking states from motion-capture, it's nice to have the data\n",
"visualized next to the model. Unlike `mjv_updateScene` (called by the `Renderer`'s `update_scene` method) which clears the scene at every call, `mjv_addGeoms` will add visual geoms to an existing scene:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "T4b_8n6t1ASk"
},
"outputs": [],
"source": [
"# Get MuJoCo's standard humanoid model.\n",
"print('Getting MuJoCo humanoid XML description from GitHub:')\n",
"!git clone https://github.com/google-deepmind/mujoco\n",
"with open('mujoco/model/humanoid/humanoid.xml', 'r') as f:\n",
" xml = f.read()\n",
"\n",
"# Load the model, make two MjData's.\n",
"model = mujoco.MjModel.from_xml_string(xml)\n",
"data = mujoco.MjData(model)\n",
"data2 = mujoco.MjData(model)\n",
"\n",
"# Episode parameters.\n",
"duration = 3 # (seconds)\n",
"framerate = 60 # (Hz)\n",
"data.qpos[0:2] = [-.5, -.5] # Initial x-y position (m)\n",
"data.qvel[2] = 4 # Initial vertical velocity (m/s)\n",
"ctrl_phase = 2 * np.pi * np.random.rand(model.nu) # Control phase\n",
"ctrl_freq = 1 # Control frequency\n",
"\n",
"# Visual options for the \"ghost\" model.\n",
"vopt2 = mujoco.MjvOption()\n",
"vopt2.flags[mujoco.mjtVisFlag.mjVIS_TRANSPARENT] = True # Transparent.\n",
"pert = mujoco.MjvPerturb() # Empty MjvPerturb object\n",
"# We only want dynamic objects (the humanoid). Static objects (the floor)\n",
"# should not be re-drawn. The mjtCatBit flag lets us do that, though we could\n",
"# equivalently use mjtVisFlag.mjVIS_STATIC\n",
"catmask = mujoco.mjtCatBit.mjCAT_DYNAMIC\n",
"\n",
"# Simulate and render.\n",
"frames = []\n",
"with mujoco.Renderer(model, 480, 640) as renderer:\n",
" while data.time < duration:\n",
" # Sinusoidal control signal.\n",
" data.ctrl = np.sin(ctrl_phase + 2 * np.pi * data.time * ctrl_freq)\n",
" mujoco.mj_step(model, data)\n",
" if len(frames) < data.time * framerate:\n",
" # This draws the regular humanoid from `data`.\n",
" renderer.update_scene(data)\n",
"\n",
" # Copy qpos to data2, move the humanoid sideways, call mj_forward.\n",
" data2.qpos = data.qpos\n",
" data2.qpos[0] += 1.5\n",
" data2.qpos[1] += 1\n",
" mujoco.mj_forward(model, data2)\n",
"\n",
" # Call mjv_addGeoms to add the ghost humanoid to the scene.\n",
" mujoco.mjv_addGeoms(model, data2, vopt2, pert, catmask, renderer.scene)\n",
"\n",
" # Render and add the frame.\n",
" pixels = renderer.render()\n",
" frames.append(pixels)\n",
"\n",
"# Render video at half real-time.\n",
"media.show_video(frames, fps=framerate/2)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Zzzugf-qPExb"
},
"source": [
"## Camera control\n",
"\n",
"Cameras can be controlled dynamically in order to achieve cinematic effects. Run the three cells below to see the difference between rendering from a static and moving camera.\n",
"\n",
"The camera-control code smoothly transitions between two trajectories, one orbiting a fixed point, the other tracking a moving object. Parameter values in the code were obtained by iterating quickly on low-res videos."
]
},
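{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before that, here is a minimal sketch (not the tutorial's own camera-control code) of how a free camera can be positioned programmatically with `mujoco.MjvCamera` and passed to `update_scene`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Minimal free-camera sketch (assumes the `model` and `data` from the previous\n",
"# section are still loaded); the attributes are fields of mjvCamera.\n",
"cam = mujoco.MjvCamera()\n",
"mujoco.mjv_defaultCamera(cam)     # free camera with default settings\n",
"cam.distance = 3                  # distance to the lookat point\n",
"cam.azimuth = 90                  # horizontal angle, in degrees\n",
"cam.elevation = -20               # vertical angle; negative looks down\n",
"cam.lookat[:] = [0, 0, 0.5]       # world position the camera orbits\n",
"\n",
"with mujoco.Renderer(model) as renderer:\n",
"  mujoco.mj_forward(model, data)\n",
"  renderer.update_scene(data, camera=cam)  # an MjvCamera can be passed directly\n",
"  media.show_image(renderer.render())"
]
},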
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "-SW-K9WuPGrp"
},
"outputs": [],
"source": [
"#@title Load the \"dominos\" model\n",
"\n",
"dominos_xml = \"\"\"\n",
"\n",
" \n",
" \n",
" \n",
" \n",
" \n",
"\n",
" \n",
"\n",
" \n",
" \n",
" \n",
" \n",
"\n",
" \n",
" \n",
" \n",
" \n",
" \n",
" \n",
"\n",
"