BotOp-2: Real robot operation checklist & first steps

The BotOp interface should work equally for a simulated and a real robot. Note that currently only a locally compiled robotic package supports connecting to a Franka robot and a RealSense camera. Here are some first cautious steps to get started with a real robot.

Basic Checklist for operating a Franka

For a full Franka setup description see https://frankaemika.github.io/docs/getting_started.html. Here is just a minimal checklist for operating a Franka:

  • Is a real-time kernel running? Check uname -r. The kernel name should end with -rt* (a quick Python check of this and the group membership below is sketched after this list).

  • Is the user in the dialout and realtime groups? Check with groups. If not, add them with sudo usermod -a -G realtime <username> and sudo usermod -a -G dialout <username>, then log out and in (or reboot) and check again.

  • Is the robot ready? Check the Desk web interface, e.g. at https://172.16.0.2/desk/. Are the joints unlocked and is the status Ready?

  • Test the console tool ry-bot -real -up -home. (See ry-bot -h for other test commands, e.g. homing.) Potential issues:

    • ry-bot: command not found. Is it located in ~/.local/bin? Do you have export PATH="$HOME/.local/bin:$PATH" in your .bashrc?

    • ‘FCI mode is not activated’. Activate it in the Desk: in older versions via the side panel, in newer versions it is hidden in a top menu.

    • Stalls when connecting to Franka - are the Franka IP addresses ({“172.16.0.2”, “172.17.0.2”} in franka.cpp) correct?

    • Franka not Ready or can’t move? Check the Desk status as described above.

    • Error message “== ERROR:franka.cpp:init:356(-2) not implemented with this compiler options”. Are you using the pypi package instead of a locally compiled one? Did you enable USE_LIBFRANKA with cmake? Did you run make install?

    • Error message roughly like ‘Franka uses wrong library version’. Do you have an old Franka and need to install version 0.7.1 of the franka lib? (See ./install.sh -v 0.7.1 libfranka in the README of robotic.)

    • When executing the cell below, check whether print(ry.compiled()) prints the correct compile time.
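
A quick in-notebook check of the first two checklist items (real-time kernel and group membership) can also be done directly in Python; this is just a convenience sketch assuming a standard Linux setup:

[ ]:
import os, grp
# the kernel release should end with -rt... (PREEMPT_RT patch)
print('kernel:', os.uname().release)
# the process should be a member of the 'realtime' and 'dialout' groups
print('groups:', [grp.getgrgid(g).gr_name for g in os.getgroups()])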

Finally, the ry-bot script (usually installed in ~/.local/bin) provides a minimalistic example of how to run the robot - have a look!

First robot motion, camera & pcl, and gripper motion

[ ]:
import robotic as ry
import numpy as np
import time
print('robotic version:', ry.__version__, ry.compiled())

Also for a real robot, we first load the model and maintain our own workspace configuration C.

[ ]:
C = ry.Config()
C.addFile(ry.raiPath('../rai-robotModels/scenarios/pandaSingle.g'))
pcl = C.addFrame('pcl', 'cameraWrist') # a frame attached to the wrist camera; will carry the point cloud
C.view(False, 'this is your workspace data structure C -- NOT THE SIMULATION')
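
To see what the loaded scenario actually contains (the panda joints, the l_gripper frame used later, and the cameraWrist frame we just attached the pcl frame to), you can list the frames and joints of C:

[ ]:
# inspect the loaded configuration
print('frames:', C.getFrameNames())
print('joints:', C.getJointNames())
print('initial joint state:', C.getJointState())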

Now we start BotOp with useRealRobot=True. By passing the model configuration C, the system knows which and how many robots there should be and tries to connect with them.

[ ]:
# True = real robot!!
bot = ry.BotOp(C, useRealRobot=True)

If that failed, you can call ry.params_print() to see which global parameters were used and whether you should change them. See also the checklist above. Or continue in simulation from here on:

[ ]:
if bot.get_t()==0: #if the above failed, use a sim...
    del bot
    bot = ry.BotOp(C, useRealRobot=False)
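
If you need to change a global parameter, note that most of them are read when the BotOp handle is constructed, so they have to be set (via ry.params_add, or a parameter file) before creating it. A minimal sketch; the parameter name below is only an illustration, check ry.params_print() for what your version actually reads:

[ ]:
# print all global parameters and the values that were used
ry.params_print()
# example override -- must happen BEFORE constructing ry.BotOp to take effect
# ('botsim/verbose' is just an illustrative name):
#ry.params_add({'botsim/verbose': 2})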

A first motion:

[ ]:
q = bot.get_qHome()
q[1] = q[1] + .2                 # modify the target: lift the second joint a bit

bot.moveTo(q)                    # append the target to the motion reference (spline)
bot.wait(C)                      # block until the reference is reached (keeps C synced)
print('first motion done')

bot.moveTo(bot.get_qHome())
bot.wait(C)
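
bot.wait(C) blocks until the motion reference is finished. If you want to keep your own loop running while the motion executes (e.g. to update views or compute features), you can instead poll the remaining spline time; a minimal sketch using only calls that also appear below:

[ ]:
# non-blocking variant: sync the workspace at ~10Hz until the reference spline ends
bot.moveTo(q)
while bot.getTimeToEnd()>0.:
    bot.sync(C, .1)

bot.moveTo(bot.get_qHome())
bot.wait(C)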

Grabbing a camera image & pcl, adding the pcl to the workspace C:

[ ]:
pcl = C.getFrame("pcl")
pcl.setShape(ry.ST.pointCloud, [2]) # the size here is pixel size for display
bot.sync(C)

count = 0

while bot.getKeyPressed()!=ord('q'):
    image, depth, points = bot.getImageDepthPcl("cameraWrist")
    pcl.setPointCloud(points, image)
    pcl.setColor([1,0,0])
    bot.sync(C, .1)

    if bot.getTimeToEnd()<=0.:      # previous motion finished -> send the next back-and-forth
        bot.moveTo(q)
        bot.moveTo(bot.get_qHome())
        count = count + 1
    if count>=3:
        break
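
If you just want to inspect a single frame rather than run the live loop, you can grab once and display the rgb and depth images, e.g. with matplotlib (not a dependency of the robotic package, assumed to be installed here):

[ ]:
import matplotlib.pyplot as plt
rgb, depth, points = bot.getImageDepthPcl('cameraWrist')
print('rgb:', rgb.shape, ' depth:', depth.shape, ' points:', points.shape)
fig, axs = plt.subplots(1, 2)
axs[0].imshow(rgb)       # color image
axs[1].matshow(depth)    # depth image (typically in meters per pixel)
plt.show()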

Closing & opening the gripper:

[ ]:
#slow close (width is the target opening in meters)
bot.gripperMove(ry._left, width=.0, speed=.1)

while not bot.gripperDone(ry._left):
    bot.sync(C)

#fast open
bot.gripperMove(ry._left, width=.08, speed=1.)

while not bot.gripperDone(ry._left):
    bot.sync(C)

Always shut down the robot properly by destroying the handle:

[ ]:
del bot
del C

Advanced: Compliance & Force/Torque feedback

[ ]:
import robotic as ry
import numpy as np
import time
[ ]:
C = ry.Config()
C.addFile(ry.raiPath('scenarios/pandaSingle.g'))
C.view(False, 'this is your workspace data structure C -- NOT THE SIMULATION')
[ ]:
# True = real robot!!
bot = ry.BotOp(C, True)

After opening the robot interface, the robot holds its current position. Try moving it by hand: you can feel the gains.

[ ]:
C.view(True, 'floating=False, damping=True -- Try moving the robot by hand!\n-- press any key to continue --')

We can let it float (i.e., always set the reference to the current q) and turn off the damping, which makes the robot move more freely:

[ ]:
bot.hold(floating=True, damping=False)
C.view(True, 'floating=True, damping=False -- Try moving the robot by hand!\n-- press any key to continue --')

We can also float with damping:

[ ]:
bot.hold(floating=True, damping=True)
C.view(True, 'floating=True, damping=True -- Try moving the robot by hand!\n-- press any key to continue --')

The hold modes above might be useful for kinesthetic teaching (you can always keep C synced and compute any features while moving the robot by hand).
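
For instance, a minimal kinesthetic-recording sketch using only the calls introduced above: let the robot float, keep C synced, and store the measured joint state at roughly 10Hz until 'q' is pressed in the viewer:

[ ]:
bot.hold(floating=True, damping=True)

path = []
while bot.getKeyPressed()!=ord('q'):
    bot.sync(C, .1)
    path.append(bot.get_q())    # current measured joint state

path = np.array(path)
print('recorded a path of shape', path.shape)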

But for autonomous compliant manipulation we typically want to follow a reference and be compliant only along some task-space dimensions, i.e., a task-space compliance. The cell below declares compliance along the x-coordinate of the gripper position and prints the external joint torques projected into that task space:

[ ]:
bot.moveTo(bot.get_qHome(), 1.)
bot.wait(C)

while bot.getKeyPressed()!=ord('q'):
    bot.sync(C, .1)
    y, J = C.eval(ry.FS.position, ["l_gripper"], [[1,0,0]])   # x-coordinate of the gripper position
    bot.setCompliance(J, 1.)                                   # declare compliance along this task dimension
    print(' direct:', J @ bot.get_tauExternal(),
          ' pseudoInv:', np.linalg.pinv(J.T, rcond=1e-3) @ bot.get_tauExternal())

bot.setCompliance([], 0.)
[ ]:
bot.home(C)
[ ]:
del bot
del C

Testing the depth camera

The following grabs an attached RealSense camera, which in our setup is mounted on the wrist by default. The robot is put in floating mode, so that you can move the camera by hand. At the same time, the point cloud is attached in the model configuration to the camera frame, and forward kinematics displays it in world coordinates. That way you can visually see how well your configuration model is calibrated to the real world, and how noisy the depth camera really is.

[ ]:
import robotic as ry
import numpy as np
import time
[ ]:
C = ry.Config()
C.addFile(ry.raiPath('scenarios/pandaSingle.g'))
C.view(False, 'this is your workspace data structure C -- NOT THE SIMULATION')
[ ]:
# True = real robot!!
bot = ry.BotOp(C, True)
[ ]:
bot.hold(floating=True, damping=False)
[ ]:
pclFrame = C.addFrame('pcl', 'cameraWrist')

while bot.getKeyPressed()!=ord('q'):
    rgb, depth, points = bot.getImageDepthPcl('cameraWrist')
    pclFrame.setPointCloud(points, rgb)

    bot.sync(C, .1)
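
If you want to project depth to 3D yourself or register the point cloud externally, you will also need the camera intrinsics; the following assumes your version provides getCameraFxycxy (focal lengths and principal point):

[ ]:
fxycxy = bot.getCameraFxycxy('cameraWrist')
print('camera intrinsics [fx, fy, cx, cy]:', fxycxy)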
[ ]:
del bot
del C