Starting with a real robot

The BotOp interface should work equally for a simulated and a real robot. Note that currently only a locally compiled robotic package supports connecting to a Franka robot and a RealSense camera. Here are some first cautious steps to get started with a real robot.

First robot motion, camera & pcl, and gripper motion

[ ]:
from robotic import ry
import numpy as np
import time
print(ry.compiled())
[ ]:
#in case you switch to simulation
ry.params_add({'botsim/verbose': 1., 'physx/motorKp': 10000., 'physx/motorKd': 1000.})
ry.params_print()

Also for a real robot, we first load the configuration and maintain our own workspace configuration C.

[ ]:
C = ry.Config()
C.addFile(ry.raiPath('../rai-robotModels/scenarios/pandaSingle.g'))
pcl = C.addFrame('pcl', 'cameraWrist')
C.view(False, 'this is your workspace data structure C -- NOT THE SIMULATION')

Now we start BotOp with real=True. By passing the model configuration C, the system knows which and how many robots there should be and tries to connect with them.

[ ]:
# True = real robot!!
bot = ry.BotOp(C, useRealRobot=True)

If that failed, you could call ry.params_print() to see which global parameters were used and whether you should change them.
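
For example (this simply reuses the same ry.params_print() call from the setup cell above):

[ ]:
# inspect the global parameters that BotOp actually used
ry.params_print()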

[ ]:
if bot.get_t()==0: #if the above failed, use a sim...
    del bot
    bot = ry.BotOp(C, useRealRobot=False)

A first motion:

[ ]:
q = bot.get_qHome()
q[1] = q[1] + .2

bot.moveTo(q)
bot.wait(C)
print('first motion done')

bot.moveTo(bot.get_qHome())
bot.wait(C)

Grabbing a camera image & pcl, adding the pcl to the workspace C:

[ ]:
pcl = C.getFrame("pcl")
pcl.setShape(ry.ST.pointCloud, [2]) # the size here is pixel size for display
bot.sync(C)

while bot.getKeyPressed()!=ord('q'):
    image, depth, points = bot.getImageDepthPcl("cameraWrist")
    pcl.setPointCloud(points, image)
    pcl.setColor([1,0,0])
    bot.sync(C, .1)

Closing & opening the gripper:

[ ]:
#slow close
bot.gripperMove(ry._left, width=.0, speed=.1)

while not bot.gripperDone(ry._left):
    bot.sync(C)

#fast open
bot.gripperMove(ry._left, width=.08, speed=1.)

while not bot.gripperDone(ry._left):
    bot.sync(C)

Always shut down the robot properly by destroying the handle:

[ ]:
del bot
del C

Advanced: Compliance & Force/Torque feedback

[ ]:
from robotic import ry
import numpy as np
import time
[ ]:
C = ry.Config()
C.addFile(ry.raiPath('scenarios/pandaSingle.g'))
C.view(False, 'this is your workspace data structure C -- NOT THE SIMULATION')
[ ]:
# True = real robot!!
bot = ry.BotOp(C, True)

After opening the robot, it is holding its position. Try moving it by hand and you can feel the gains.
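
For completeness, this default mode can also be set explicitly (a minimal sketch; hold() is used with the same signature further below):

[ ]:
# explicitly request the default mode: hold a fixed reference, with damping
bot.hold(floating=False, damping=True)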

[ ]:
C.view(True, 'floating=False, damping=True -- Try moving the robot by hand!\n-- press any key to continue --')

We can let it float (=setting the reference always to q_current) and turn off the damping, which makes the robot move more freely:

[ ]:
bot.hold(floating=True, damping=False)
C.view(True, 'floating=True, damping=False -- Try moving the robot by hand!\n-- press any key to continue --')

We can also float with damping:

[ ]:
bot.hold(floating=True, damping=True)
C.view(True, 'floating=True, damping=True -- Try moving the robot by hand!\n-- press any key to continue --')

The hold methods above might be useful for kinesthetic teaching (you can always keep C sync'ed and compute any features while moving the robot).
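
As a minimal sketch of such kinesthetic recording (reusing only calls from this notebook; the 'l_gripper' frame comes from the pandaSingle scenario, and the variable names are illustrative), one could let the robot float and log the gripper position while you move it:

[ ]:
# sketch: record gripper positions while moving the robot by hand; press 'q' in the view window to stop
bot.hold(floating=True, damping=True)

path = []
while bot.getKeyPressed()!=ord('q'):
    bot.sync(C, .1)                               # keep the workspace C in sync with the robot state
    y, J = C.eval(ry.FS.position, ['l_gripper'])  # gripper position in world coordinates
    path.append(y.copy())

print('recorded', len(path), 'waypoints')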

But for autonomous compliant manipulation we want to follow a reference while imposing compliance only along certain task-space dimensions, i.e., a task-space compliance.

[ ]:
bot.moveTo(bot.get_qHome(), 1.)
bot.wait(C)

while bot.getKeyPressed()!=ord('q'):
    bot.sync(C, .1)
    y, J = C.eval(ry.FS.position, ["l_gripper"], [[1,0,0]])  # select the x-coordinate of the gripper position
    bot.setCompliance(J, 1.)                                  # be compliant along this task-space direction
    print(' direct:', J @ bot.get_tauExternal(),              # external joint torques projected via J
          ' pseudoInv:', np.linalg.pinv(J.T, rcond=1e-3) @ bot.get_tauExternal())  # ...and via the pseudo-inverse of J^T

bot.setCompliance([], 0.)
[ ]:
bot.home(C)
[ ]:
del bot
del C

Testing the depth camera

The following grabs images from the attached RealSense camera, which in our setup is mounted on the wrist by default. The robot is put in floating mode, so that you can move the camera by hand. At the same time, the point cloud is attached in the model configuration to the camera frame, and forward kinematics displays it in world coordinates. That way you can visually see how well your configuration model is calibrated to the real world, and how noisy the depth camera really is.

[ ]:
from robotic import ry
import numpy as np
import time
[ ]:
C = ry.Config()
C.addFile(ry.raiPath('scenarios/pandaSingle.g'))
C.view(False, 'this is your workspace data structure C -- NOT THE SIMULATION')
[ ]:
# False = simulation; set to True for the real robot
bot = ry.BotOp(C, False)
[ ]:
bot.hold(floating=True, damping=False)
[ ]:
pclFrame = C.addFrame('pcl', 'cameraWrist')

while bot.getKeyPressed()!=ord('q'):
    rgb, depth, points = bot.getImageDepthPcl('cameraWrist')
    pclFrame.setPointCloud(points, rgb)

    bot.sync(C, .1)