Extension - Rendering: Basic OpenGL, offscreen (headless), and interface to physics-based rendering

The view is realized using GLFW and OpenGL shaders. It is straightforward to grab the image (and, e.g., make a video from such images), as well as the depth (e.g., to simulate a depth camera).

However, grabbing from the actual view window is brittle: it depends on the window manager and can give odd results if the window is not visible. The package therefore provides an offscreen (headless) implementation of the OpenGL rendering, which allows computing images and depth (as well as object segmentation labels) without a window or X.

The simple OpenGL rendering is fine for depth camera simulation, but not visually realistic. An interface to Nvidia’s NVISII physics-based renderer enables very flexible, realistic rendering.

Meshes, Densities, Implicit Surfaces, Point Cloud, Normals

[ ]:
import robotic as ry
import numpy as np

X = np.random.rand(3,3,3)        # small 3D tensor, used as a density/tensor shape below
Y = np.random.rand(30,30,30)     # larger 3D density, used as an implicit surface below
Z = np.random.rand(100,3)        # 100 random 3D points, used for lines and a point cloud
col = 255.*np.random.rand(100,3) # random per-point colors

C = ry.Config()
C.addFrame('mesh') .setMeshFile(ry.raiPath('g1/meshes/head_link.h5')) .setPosition([0,0,.7])
C.addFrame('tensor') .setTensorShape(0.01*X, [1.,1.,1.]) .setPosition([0,0,.5])
C.addFrame('implicit') .setImplicitSurface(Y-0.5, [1.,1.,1.], 8) .setPosition([1,0,.5])
C.addFrame('lines') .setLines(Z-.5, col) .setPosition([-1,0,.5])
C.addFrame('points') .setPointCloud(Z-.5, col) .setPosition([-1,0,.5])
C.view()

Note that the points of the point cloud coincide with the line vertices, and that the random per-vertex colors are interpolated along the lines.

setMeshFile works for any mesh file format supported by the assimp library. The .h5 file used here (use ry-h5info on the command line to inspect .h5 files) simply stores the mesh as vertex, triangle, and color arrays.
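
For instance, the mesh file used above can be inspected from within the notebook by shelling out to the ry-h5info tool mentioned above; its output should simply list the stored arrays:

[ ]:
import os
# inspect the .h5 mesh used above; ry-h5info lists the stored vertex/triangle/color arrays
os.system('ry-h5info ' + ry.raiPath('g1/meshes/head_link.h5'))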

Modifying the shader

For those familiar with shaders: when the userShaderFiles parameter is set, the viewer uses local vertex and fragment shader files, which you can modify arbitrarily.
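
As a rough sketch, such parameters can be set from Python via ry.params_add before the first viewer is created. The exact value expected for userShaderFiles, and which local shader files the viewer then looks for, are assumptions here; check ry.params_print() and the package docs:

[ ]:
import robotic as ry
# assumption: userShaderFiles acts as a flag read when the first viewer is created,
# making it load local vertex/fragment shader files instead of the built-in ones
ry.params_add({'userShaderFiles': True})
ry.params_print()   # shows all currently set parameters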


Grabbing images and depth from the view window

It’s simple to grab the RGB and depth buffers from the underlying OpenGL context:

[ ]:
import robotic as ry
import numpy as np
import matplotlib.pyplot as plt
import time
print(ry.__version__, ry.compiled())
[ ]:
C = ry.Config()
C.addFile(ry.raiPath('scenarios/pandaSingle.g'))
C.view()
time.sleep(.5)
[ ]:
rgb = C.viewer().getRgb()
depth = C.viewer().getDepth()
print(rgb.shape, depth.shape)
[ ]:
fig = plt.figure()
fig.add_subplot(1,2,1)
plt.imshow(rgb)
fig.add_subplot(1,2,2)
plt.imshow(depth)
plt.show()

Setting/initializing camera pose and specs

The camera pose can be set to coincide with a particular frame. (Create a dedicated frame if you want full freedom in placing the camera.) Camera intrinsics (focalLength, width, height) can also be set via frame attributes. Our scene has a cameraTop frame predefined:

[ ]:
f = C.getFrame('cameraTop')
f.setAttributes({'focalLength': .5, 'width': 500, 'height': 500, 'zRange': [.01, 10.]})
C.viewer().setCamera(f)

When a configuration has a camera_init frame (e.g. defined in a file), the camera is initialized to its pose and specs when a view is first created (a sketch of creating such a frame programmatically follows below). When a configuration has a camera_track frame, the camera is set to its pose and specs at every update of the view. To reset to the default camera:

[ ]:
C.viewer().setCamera(None)
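
For illustration, a camera_init frame can also be created programmatically before the first view. The pose and intrinsics below are arbitrary example values, assuming the usual OpenGL convention that the camera looks along the frame's negative z-axis:

[ ]:
C2 = ry.Config()
C2.addFile(ry.raiPath('scenarios/pandaSingle.g'))
# arbitrary example pose and intrinsics; with identity orientation the camera looks straight down
f = C2.addFrame('camera_init')
f.setPosition([0., 0., 2.5])
f.setAttributes({'focalLength': .895, 'width': 640, 'height': 360})
C2.view()   # this first view should open with the camera at 'camera_init'
del C2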

Saving pngs for video generation

There are certainly better ways to generate a video, but a simple one is to write many numbered pngs and call ffmpeg from the shell. For instance:

[ ]:
# first ensure a folder
import os
os.system('mkdir -p z.vid')
[ ]:
q = C.getJointState()

for t in range(30):
    q = q + .02
    C.setJointState(q)
    C.view()
    C.viewer().savePng('z.vid/')
[ ]:
os.system('ffmpeg -v 16 -f image2 -framerate 12 -i z.vid/%04d.png -c:v libx264 -r 25 -pix_fmt yuv420p -y vid.mp4')

…which creates an mp4, reading the pngs at 12 images per second (and re-encoding the output at 25 fps).

Offscreen (headless) CameraView

But this way of grabbing images and depth from the view window is brittle (it depends on the window manager) and also includes markers and non-visual objects. A much better way is to use the offscreen CameraView:

[ ]:
cam = ry.CameraView(C)
[ ]:
rgb, depth = cam.computeImageAndDepth(C)
[ ]:
fig = plt.figure()
fig.add_subplot(1,2,1)
plt.imshow(rgb)
fig.add_subplot(1,2,2)
plt.imshow(depth)
plt.show()
[ ]:
cam.setCamera(C.getFrame('cameraTop')) # cameraWrist is another pre-defined frame
[ ]:
rgb, depth = cam.computeImageAndDepth(C)
[ ]:
fig = plt.figure()
fig.add_subplot(1,2,1)
plt.imshow(rgb)
fig.add_subplot(1,2,2)
plt.imshow(depth)
plt.show()
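
The intro also mentioned object segmentation labels; the offscreen renderer can compute those as well. The method names below are an assumption about the CameraView binding, so check help(ry.CameraView) if they differ in your version:

[ ]:
# per-pixel segmentation from the same offscreen renderer
# (assumed method names -- see help(ry.CameraView) for what your version exposes)
seg = cam.computeSegmentationImage(C)   # color-coded segmentation image
ids = cam.computeSegmentationID(C)      # integer frame ID per pixel
plt.imshow(seg)
plt.show()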

Point Cloud Rendering

[ ]:
import robotic as ry
import matplotlib.pyplot as plt
[ ]:
C = ry.Config()
C.addFile(ry.raiPath('scenarios/pandaSingle.g'))
C.view()
[ ]:
cam = ry.CameraView(C)
cam.setCamera(C.getFrame('cameraTop'))
rgb, depth = cam.computeImageAndDepth(C)
pcl = ry.depthImage2PointCloud(depth, cam.getFxycxy())
[ ]:
print(rgb.shape, depth.shape, pcl.shape)
print(C.viewer().getCamera_fxycxy())
[ ]:
fig = plt.figure()
fig.add_subplot(1,2,1)
plt.imshow(rgb)
fig.add_subplot(1,2,2)
plt.imshow(depth)
plt.show()
[ ]:
f = C.addFrame('pcl', 'cameraTop')
f.setPointCloud(pcl, [255,0,0])
C.view()
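
The point cloud returned by depthImage2PointCloud is given in camera coordinates, which is why the pcl frame is attached to cameraTop above. To get the points in world coordinates instead, a sketch like the following should work (assuming Frame.getRotationMatrix is available; otherwise convert the quaternion from getQuaternion):

[ ]:
cam_frame = C.getFrame('cameraTop')
t = cam_frame.getPosition()         # camera position in world coordinates
R = cam_frame.getRotationMatrix()   # camera orientation as a 3x3 rotation matrix
P = pcl.reshape(-1, 3)              # flatten the (H,W,3) point cloud to (H*W,3)
P_world = P @ R.T + t               # map points from camera frame to world frame
print(P_world.shape)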

Physics-based Rendering using Nvidia NVISII

[ ]:
# import robotic as ry
# from matplotlib import pyplot as plt
# from robotic import render
# print('version', ry.compiled())
[ ]:
# C = ry.Config()
# C.addFile(ry.raiPath('scenarios/pandaSingle.g'))
# C.view()
[ ]:
# C.addFrame('light1') .setPosition([1,0,2]) .setShape(ry.ST.ssBox, [.5,2,.1,.02]) .setColor([.5]) .setAttribute('temperature', 6000)
# C.addFrame('light2') .setPosition([-1,0,2]) .setShape(ry.ST.ssBox, [.5,2,.1,.02]) .setColor([.8]) .setAttribute('temperature', 6000)
# C.view()
[ ]:
# nv = render.NvisiiRenderer(600, 400, C.view_focalLength())
# nv.addConfig(C)
[ ]:
# nv.setCamera(C)
# rgb = nv.render(256)
# plt.imshow(rgb)
# plt.show()
[ ]:
# del nv
[ ]: