rai python API

rai bindings

class robotic.ry.ArgWord

Members:

_left

_right

_sequence

_path

property name
class robotic.ry.BotOp

Robot Operation interface – see https://marctoussaint.github.io/robotics-course/tutorials/1b-botop.html

getCameraFxycxy(self: robotic.ry.BotOp, sensorName: str) -> arr

returns camera intrinsics

getImageAndDepth(self: robotic.ry.BotOp, sensorName: str) -> tuple

returns image and depth from a camera sensor

getImageDepthPcl(self: robotic.ry.BotOp, sensorName: str, globalCoordinates: bool = False) -> tuple

returns image, depth and point cloud (assuming the sensor knows its intrinsics) from a camera sensor, optionally in global instead of camera-frame-relative coordinates
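Example (a minimal sketch, not part of the generated reference; it assumes the BotOp constructor takes a Config and a real-vs-simulation flag, and that the loaded scene contains a frame named ‘camera’ with camera attributes):

  import robotic.ry as ry

  C = ry.Config()
  C.addFile(ry.raiPath('scenarios/pandaSingle.g'))  # assumed example scene file

  bot = ry.BotOp(C, False)                 # False: run in simulation, not on the real robot
  rgb, depth, pcl = bot.getImageDepthPcl('camera', globalCoordinates=True)
  fxycxy = bot.getCameraFxycxy('camera')   # intrinsics (fx, fy, cx, cy)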

getKeyPressed(self: robotic.ry.BotOp) -> int

get the key pressed in the window at the last sync

getTimeToEnd(self: robotic.ry.BotOp) -> float

get the time-to-go of the current spline reference that is tracked (use getTimeToEnd()<=0. to check whether motion execution is done)

get_q(self: robotic.ry.BotOp) -> arr

get the current (real) robot joint vector

get_qDot(self: robotic.ry.BotOp) -> arr

get the current (real) robot joint velocities

get_qHome(self: robotic.ry.BotOp) -> arr

returns the home joint vector (defined as the configuration C when you created BotOp)

get_t(self: robotic.ry.BotOp) -> float

returns the control time (absolute time managed by the high-frequency tracking controller)

get_tauExternal(self: robotic.ry.BotOp) -> arr

get the current (real) robot joint torques (external: gravity & acceleration removed) – each call averages over the interval since the last call; the first call might return nonsense!

gripperClose(self: robotic.ry.BotOp, leftRight: robotic.ry.ArgWord, force: float = 10.0, width: float = 0.05, speed: float = 0.1) -> None

close the gripper

gripperCloseGrasp(self: robotic.ry.BotOp, leftRight: robotic.ry.ArgWord, objName: str, force: float = 10.0, width: float = 0.05, speed: float = 0.1) -> None

close the gripper and indicate what should be grasped – makes no difference on the real robot, but helps the simulation mimic grasping more reliably

gripperDone(self: robotic.ry.BotOp, leftRight: robotic.ry.ArgWord) -> bool

returns whether the gripper motion is done

gripperMove(self: robotic.ry.BotOp, leftRight: robotic.ry.ArgWord, width: float = 0.075, speed: float = 0.2) -> None

move the gripper to the given width (default: open)

gripperPos(self: robotic.ry.BotOp, leftRight: robotic.ry.ArgWord) -> float

returns the gripper position

hold(self: robotic.ry.BotOp, floating: bool = False, damping: bool = True) -> None

hold the robot with a trivial PD controller; floating means reference = real; without damping the robot is free floating

home(self: robotic.ry.BotOp, C: robotic.ry.Config) -> None

immediately drive the robot home (see get_qHome); keeps the argument C synced; same as moveTo(qHome, 1., True); wait(C);

move(self: robotic.ry.BotOp, path: arr, times: arr, overwrite: bool = False, overwriteCtrlTime: float = -1.0) -> None

core motion command: set a spline motion reference; if only a single time [T] is given for multiple waypoints, it assumes equal time spacing with TOTAL time T

By default, the given spline is APPENDED to the current reference spline. The user can also enforce that the given spline overwrites the current reference, starting at the given absolute ctrlTime. This allows implementation of reactive (e.g. MPC-style) control. However, the user needs to take care that overwriting is done smoothly, i.e., that the given spline starts with a pos/vel close to the pos/vel of the current reference at the given ctrlTime.
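Example (a sketch of the append vs. overwrite semantics described above; assumes bot is a BotOp instance as in the earlier sketch):

  q0 = bot.get_q()
  q1 = bot.get_qHome()

  # append a 2-waypoint spline with TOTAL duration 3 seconds:
  bot.move([q1, q0], [3.])

  # reactive overwrite: replace the reference starting at the current control time;
  # the new spline should start close to the current reference pos/vel
  t = bot.get_t()
  bot.move([q1], [2.], overwrite=True, overwriteCtrlTime=t)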

moveAutoTimed(self: robotic.ry.BotOp, path: arr, maxVel: float = 1.0, maxAcc: float = 1.0) -> None

helper to execute a path (typically fine resolution, e.g. from KOMO or RRT) with equal time spacing chosen for the given max vel/acc

moveTo(self: robotic.ry.BotOp, q_target: arr, timeCost: float = 1.0, overwrite: bool = False) -> None

helper to move to a single joint vector target, where the timing is chosen optimally based on the given timing cost

When using overwrite, this immediately steers to the target – use this as a well-timed reactive q_target controller
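Example (a sketch of a blocking move to the home pose, using only the calls documented here):

  bot.moveTo(bot.get_qHome(), timeCost=1.)
  bot.wait(C, forKeyPressed=False, forTimeToEnd=True)   # sync C until the motion ends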

setCompliance(self: robotic.ry.BotOp, J: arr, compliance: float = 0.5) -> None

set a task-space compliance, where J defines the task-space Jacobian, and compliance goes from 0 (no compliance) to 1 (full compliance, but still some damping)

setControllerWriteData(self: robotic.ry.BotOp, arg0: int) -> None

[for internal debugging only] triggers writing control data into a file

stop(self: robotic.ry.BotOp, C: robotic.ry.Config) -> None

immediately stop the robot; keeps the argument C synced; same as moveTo(get_q(), 1., True); wait(C);

sync(self: robotic.ry.BotOp, C: robotic.ry.Config, waitTime: float = 0.1) -> int

sync your workspace configuration C with the robot state

wait(self: robotic.ry.BotOp, C: robotic.ry.Config, forKeyPressed: bool = True, forTimeToEnd: bool = True) -> int

repeatedly sync your workspace C until (optionally) a key is pressed or the motion ends

class robotic.ry.CameraView

Offscreen rendering

computeImageAndDepth(self: robotic.ry.CameraView, config: robotic.ry.Config, visualsOnly: bool = True) -> tuple

returns image and depth from a camera sensor; the ‘config’ argument needs to be the same configuration as in the constructor, but in a new state

computeSegmentationID(self: robotic.ry.CameraView) -> Array<T>

return a uint16 array with the object ID segmentation

computeSegmentationImage(self: robotic.ry.CameraView) -> Array<T>

return an rgb image encoding the object ID segmentation

getFxyCxy(self: robotic.ry.CameraView) -> arr

return the camera intrinsics f_x, f_y, c_x, c_y

setCamera(self: robotic.ry.CameraView, cameraFrameName: str) -> rai::CameraView::Sensor

select a camera, typically a frame that has camera info attributes
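Example (a minimal offscreen-rendering sketch; it assumes the CameraView constructor takes the Config to render, and that ‘cameraFrame’ names a frame with camera attributes – neither is specified in this reference):

  cam = ry.CameraView(C)
  cam.setCamera('cameraFrame')
  rgb, depth = cam.computeImageAndDepth(C)   # C is the same Config, possibly in a new state
  fx, fy, cx, cy = cam.getFxyCxy()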

class robotic.ry.Config

Core data structure to represent a kinematic configuration (essentially a tree of frames). See https://marctoussaint.github.io/robotics-course/tutorials/1a-configurations.html

addConfigurationCopy(self: robotic.ry.Config, config: robotic.ry.Config, tau: float = 1.0) -> None
addFile(self: robotic.ry.Config, filename: str, namePrefix: str = '') -> robotic.ry.Frame

add the contents of the file to C

addFrame(self: robotic.ry.Config, name: str, parent: str = '', args: str = '') -> robotic.ry.Frame

add a new frame to C; optionally make this a child of the given parent; use the Frame methods to set properties of the new frame
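Example (a sketch of building a small scene with addFrame; the frame name and the shape/pose values are arbitrary):

  C = ry.Config()
  f = C.addFrame('box')
  f.setShape(ry.ST.ssBox, [.1, .1, .1, .02])   # sphere-swept box: x,y,z sizes plus corner radius
  f.setPosition([0., 0., 1.])
  f.setColor([1., 0., 0.])
  C.view()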

animate(self: robotic.ry.Config) -> None

displays the configuration while articulating all DOFs in sequence

attach(self: robotic.ry.Config, arg0: str, arg1: str) -> None

change the configuration by creating a rigid joint from frame1 to frame2, adopting their current relative pose. This also breaks the first joint that is parental to frame2 and reverses the topological order from frame2 to the broken joint

clear(self: robotic.ry.Config) -> None

clear all frames and additional data; becomes the empty configuration, with no frames

computeCollisions(self: robotic.ry.Config) -> None

call the broadphase collision engine (SWIFT++ or FCL) to generate the list of collisions (or near proximities) between all frame shapes that have the collision tag set non-zero

copy(self: robotic.ry.Config, C2: robotic.ry.Config) -> None

make C a (deep) copy of the given C2

delFrame(self: robotic.ry.Config, frameName: str) -> None

destroy and remove a frame from C

equationOfMotion(self: robotic.ry.Config, qdot: List[float], gravity: bool) -> tuple
eval(self: robotic.ry.Config, featureSymbol: robotic.ry.FS, frames: StringA = [], scale: arr = array(1.e-05), target: arr = array(0.0078125), order: int = -1) -> tuple

evaluate a feature – see https://marctoussaint.github.io/robotics-course/tutorials/features.html
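Example (a sketch of evaluating a feature; the frame name ‘gripper’ is an arbitrary example, and the returned tuple is assumed to be (value, Jacobian) as in the linked tutorial):

  y, J = C.eval(ry.FS.position, ['gripper'])
  print('position:', y, ' Jacobian shape:', J.shape)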

feature(self: robotic.ry.Config, featureSymbol: robotic.ry.FS, frameNames: List[str] = [], scale: List[float] = [], target: List[float] = [], order: int = -1) -> Feature

create a feature (a differentiable map from joint state to a vector space), as typically used for IK or optimization. See the dedicated tutorial for details. featureSymbol defines which mapping this is (position, vectors, collision distance, etc). Many mappings refer to one or several frames, which need to be specified using frameNames

frame(self: robotic.ry.Config, frameName: str) -> robotic.ry.Frame

get access to a frame by name; use the Frame methods to set/get frame properties

frames(self: robotic.ry.Config) -> List[robotic.ry.Frame]
getCollisions(self: robotic.ry.Config, belowMargin: float = 1.0) -> list

return the results of collision computations: a list of 3-tuples (frame1, frame2, distance). Optionally report only distances below the margin. To get really precise distances and penetrations, use the FS.distance feature with the two frame names
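Example (a sketch of computing and querying collisions; assumes the relevant shapes have their collision/contact tag set, e.g. via Frame.setContact):

  C.computeCollisions()      # run the broadphase engine first
  for f1, f2, d in C.getCollisions(belowMargin=0.01):
      print(f1, f2, 'distance:', d)
  print('total penetration:', C.getTotalPenetration())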

getDofIDs(self: robotic.ry.Config) -> List[int]
getFrame(self: robotic.ry.Config, frameName: str) -> robotic.ry.Frame

get access to a frame by name; use the Frame methods to set/get frame properties

getFrameDimension(self: robotic.ry.Config) -> int

get the total number of frames

getFrameNames(self: robotic.ry.Config) -> List[str]

get the list of frame names

getFrameState(*args, **kwargs)

Overloaded function.

  1. getFrameState(self: robotic.ry.Config) -> numpy.ndarray[numpy.float64]

get the frame state as an n-times-7 numpy matrix, with a 7D pose per frame

  2. getFrameState(self: robotic.ry.Config, arg0: str) -> numpy.ndarray[numpy.float64]

TODO remove -> use individual frame!

getJointDimension(self: robotic.ry.Config) -> int

get the total number of degrees of freedom

getJointNames(self: robotic.ry.Config) -> StringA

get the list of joint names

getJointState(self: robotic.ry.Config) -> arr

get the joint state as a numpy vector, optionally only for a subset of joints specified as a list of joint names

getTotalPenetration(self: robotic.ry.Config) -> float

returns the sum of all penetrations

makeObjectsConvex(self: robotic.ry.Config) -> None

remake all meshes associated with all frames to become their convex hull

report(self: robotic.ry.Config) -> str
selectJoints(self: robotic.ry.Config, jointNames: List[str], notThose: bool = False) -> None

redefine what are considered the DOFs of this configuration: only joints listed in jointNames are considered part of the joint state and define the number of DOFs

setFrameState(*args, **kwargs)

Overloaded function.

  1. setFrameState(self: robotic.ry.Config, X: List[float], frames: List[str] = []) -> None

set the frame state, optionally only for a subset of frames specified as a list of frame names

  2. setFrameState(self: robotic.ry.Config, X: numpy.ndarray, frames: List[str] = []) -> None

set the frame state, optionally only for a subset of frames specified as a list of frame names

setJointState(self: robotic.ry.Config, q: arr, joints: list = []) -> None

set the joint state, optionally only for a subset of joints specified as a list of joint names
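Example (a sketch of a joint-state round trip; the perturbation is purely illustrative):

  q = C.getJointState()
  q[0] += 0.1
  C.setJointState(q)
  C.view()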

setJointStateSlice(self: robotic.ry.Config, arg0: List[float], arg1: int) -> None
sortFrames(self: robotic.ry.Config) -> None

resort the internal order of frames according to the tree topology. This is important before saving the configuration.

stepDynamics(self: robotic.ry.Config, qdot: List[float], u_control: List[float], tau: float, dynamicNoise: float, gravity: bool) -> numpy.ndarray[numpy.float64]
view(self: robotic.ry.Config, pause: bool = False, message: str = None) -> int

open a view window for the configuration

view_close(self: robotic.ry.Config) -> None

close the view

view_focalLength(self: robotic.ry.Config) -> float

return the focal length of the view camera (only intrinsic parameter)

view_fxycxy(self: robotic.ry.Config) -> arr

return (fx, fy, cx, cy): the focal length and image center in PIXEL UNITS

view_getDepth(self: robotic.ry.Config) -> numpy.ndarray[numpy.float32]
view_getRgb(self: robotic.ry.Config) -> numpy.ndarray[numpy.uint8]
view_playVideo(self: robotic.ry.Config, delay: float = 1.0, saveVideoPath: str = None) -> None
view_pose(self: robotic.ry.Config) -> arr

return the 7D pose of the view camera

view_recopyMeshes(self: robotic.ry.Config) -> None
view_savePng(self: robotic.ry.Config, pathPrefix: str = 'z.vid/') -> None

saves a png image of the current view, numbered with a global counter, with the intention to make a video

view_setCamera(self: robotic.ry.Config, arg0: robotic.ry.Frame) -> None

set the camera pose to a frame, and check frame attributes for intrinsic parameters (focalLength, width, height)

watchFile(self: robotic.ry.Config, arg0: str) -> None

launch a viewer that listens (inotify) for changes to a file (made by you in an editor), and reloads, displays and animates the configuration whenever the file changes

write(self: robotic.ry.Config) -> str

write the full configuration into a string (roughly yaml), e.g. for file export

writeCollada(self: robotic.ry.Config, arg0: str, arg1: str) -> None

write the full configuration into a collada file for export

writeURDF(self: robotic.ry.Config) -> str

write the full configuration as URDF into a string, e.g. for file export

class robotic.ry.ControlMode

Members:

none

position

velocity

acceleration

spline

property name
class robotic.ry.FS

Members:

position

positionDiff

positionRel

quaternion

quaternionDiff

quaternionRel

pose

poseDiff

poseRel

vectorX

vectorXDiff

vectorXRel

vectorY

vectorYDiff

vectorYRel

vectorZ

vectorZDiff

vectorZRel

scalarProductXX

scalarProductXY

scalarProductXZ

scalarProductYX

scalarProductYY

scalarProductYZ

scalarProductZZ

gazeAt

angularVel

accumulatedCollisions

jointLimits

distance

negDistance

oppose

qItself

jointState

aboveBox

insideBox

pairCollision_negScalar

pairCollision_vector

pairCollision_normal

pairCollision_p1

pairCollision_p2

standingAbove

physics

contactConstraints

energy

transAccelerations

transVelocities

property name
class robotic.ry.Feature

to be removed - use only directly from Config

description(self: robotic.ry.Feature, arg0: robotic.ry.Config) -> str
eval(self: robotic.ry.Feature, arg0: robotic.ry.Config) -> tuple
setOrder(self: robotic.ry.Feature, arg0: int) -> robotic.ry.Feature
setScale(self: robotic.ry.Feature, arg0: arr) -> robotic.ry.Feature
setTarget(self: robotic.ry.Feature, arg0: arr) -> robotic.ry.Feature
class robotic.ry.Frame

A (coordinate) frame of a configuration, which can have a parent, and an associated shape, joint, and/or inertia

addAttributes(self: robotic.ry.Frame, arg0: dict) -> None

add/set attributes for the frame

getAttributes(self: robotic.ry.Frame) -> dict

get frame attributes

getJointState(self: robotic.ry.Frame) -> arr
getMeshPoints(self: robotic.ry.Frame) -> arr
getMeshTriangles(self: robotic.ry.Frame) -> uintA
getPosition(self: robotic.ry.Frame) -> arr
getQuaternion(self: robotic.ry.Frame) -> arr
getRelativePosition(self: robotic.ry.Frame) -> arr
getRelativeQuaternion(self: robotic.ry.Frame) -> arr
getRotationMatrix(self: robotic.ry.Frame) -> arr
getSize(self: robotic.ry.Frame) -> arr
info(self: robotic.ry.Frame) -> dict
setAttribute(self: robotic.ry.Frame, arg0: str, arg1: float) -> robotic.ry.Frame
setColor(self: robotic.ry.Frame, arg0: arr) -> robotic.ry.Frame
setContact(self: robotic.ry.Frame, arg0: int) -> robotic.ry.Frame
setJoint(self: robotic.ry.Frame, arg0: robotic.ry.JT) -> robotic.ry.Frame
setJointState(self: robotic.ry.Frame, arg0: arr) -> robotic.ry.Frame
setMass(self: robotic.ry.Frame, arg0: float) -> robotic.ry.Frame
setMeshAsLines(self: robotic.ry.Frame, arg0: List[float]) -> None
setParent(self: robotic.ry.Frame, parent: robotic.ry.Frame, keepAbsolutePose_and_adaptRelativePose: bool = False, checkForLoop: bool = False) -> robotic.ry.Frame
setPointCloud(self: robotic.ry.Frame, points: numpy.ndarray, colors: numpy.ndarray[numpy.uint8] = array([], dtype=uint8)) -> None
setPose(self: robotic.ry.Frame, arg0: str) -> None
setPosition(self: robotic.ry.Frame, arg0: arr) -> robotic.ry.Frame
setQuaternion(self: robotic.ry.Frame, arg0: arr) -> robotic.ry.Frame
setRelativePose(self: robotic.ry.Frame, arg0: str) -> None
setRelativePosition(self: robotic.ry.Frame, arg0: arr) -> robotic.ry.Frame
setRelativeQuaternion(self: robotic.ry.Frame, arg0: arr) -> robotic.ry.Frame
setShape(self: robotic.ry.Frame, type: robotic.ry.ST, size: arr) -> robotic.ry.Frame
class robotic.ry.ImpType

Members:

closeGripper

moveGripper

depthNoise

rgbNoise

adversarialDropper

objectImpulses

noPenetrations

property name
class robotic.ry.JT

Members:

hingeX

hingeY

hingeZ

transX

transY

transZ

transXY

trans3

transXYPhi

transYPhi

universal

rigid

quatBall

phiTransXY

XBall

free

generic

tau

property name
class robotic.ry.KOMO

Constrained solver to optimize configurations or paths. (KOMO = k-order Markov Optimization) – see https://marctoussaint.github.io/robotics-course/tutorials/1c-komo.html

addControlObjective(self: robotic.ry.KOMO, times: arr, order: int, scale: float = 1.0, target: arr = array(0.0078125), deltaFromSlice: int = 0, deltaToSlice: int = 0) -> Objective
  • times: (as for addObjective) the phase-interval in which this objective holds; [] means all times

  • order: Do we penalize the jointState directly (order=0: penalizing sqr distance to qHome), velocities (order=1: penalizing sqr distances between consecutive configurations), or accelerations (order=2: penalizing accelerations across 3 configurations)?

  • scale: as usual, but modulated by a factor ‘sqrt(delta t)’ that ensures that total control costs are approximately independent of the choice of stepsPerPhase

addInteraction_elasticBounce(self: robotic.ry.KOMO, time: float, from: str, to: str, elasticity: float = 0.8, stickiness: float = 0.0) -> None
addModeSwitch(self: robotic.ry.KOMO, times: arr, newMode: robotic.ry.SY, frames: StringA, firstSwitch: bool = True) -> None
addObjective(self: robotic.ry.KOMO, times: arr, feature: robotic.ry.FS, frames: List[str] = [], type: ObjectiveType, scale: arr = array(0.0078125), target: arr = array(0.0078125), order: int = -1) -> None

central method to define objectives in the KOMO NLP (see the sketch after this list):

  • times: the time intervals (subset of configurations in a path) over which this feature is active (irrelevant for IK)

  • feature: the feature symbol (see advanced Feature tutorial)

  • frames: the frames for which the feature is computed, given as a list of frame names

  • type: whether this is a sum-of-squares (sos) cost, or an eq or ineq constraint

  • scale: the matrix(!) by which the feature is multiplied

  • target: the offset which is subtracted from the feature (before scaling)
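Example (a sketch of a one-phase IK problem; it assumes the constructor signature KOMO(config, phases, slicesPerPhase, kOrder, enableCollisions) – this reference only notes that config and timing are set in the constructor – and the frame names ‘gripper’ and ‘target’ are arbitrary):

  komo = ry.KOMO(C, 1., 1, 0, False)
  komo.addControlObjective([], 0, 1e-1)   # order 0: stay close to the initial state
  komo.addObjective([1.], ry.FS.positionDiff, ['gripper', 'target'], ry.OT.eq, [1e1])
  nlp = komo.nlp()                        # hand this NLP to an NLP_Solver (see below)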

addQuaternionNorms(self: robotic.ry.KOMO, times: arr = array(0.0078125), scale: float = 3.0, hard: bool = True) -> None
addTimeOptimization(self: robotic.ry.KOMO) -> None
clearObjectives(self: robotic.ry.KOMO) -> None
getFeatureNames(self: robotic.ry.KOMO) -> StringA

returns a long list of features (per time slice!), to be used by an NLP_Solver

getForceInteractions(self: robotic.ry.KOMO) -> list
getFrameState(self: robotic.ry.KOMO, arg0: int) -> arr
getPath(self: robotic.ry.KOMO) -> arr
getPathFrames(self: robotic.ry.KOMO) -> arr
getPathTau(self: robotic.ry.KOMO) -> arr
getPath_qAll(self: robotic.ry.KOMO) -> arrA
getReport(self: robotic.ry.KOMO, arg0: bool) -> dict
getT(self: robotic.ry.KOMO) -> int
initOrg(self: robotic.ry.KOMO) -> None
initPhaseWithDofsPath(self: robotic.ry.KOMO, t_phase: int, dofIDs: uintA, path: arr, autoResamplePath: bool = False) -> None
initRandom(self: robotic.ry.KOMO, verbose: int = 0) -> None
initWithConstant(self: robotic.ry.KOMO, q: arr) -> None
initWithPath_qOrg(self: robotic.ry.KOMO, q: arr) -> None
initWithWaypoints(self: robotic.ry.KOMO, waypoints: arrA, waypointSlicesPerPhase: int = 1, interpolate: bool = False, verbose: int = -1) -> uintA
nlp(self: robotic.ry.KOMO) -> NLP

return the problem NLP

report(self: robotic.ry.KOMO, specs: bool = False, plotOverTime: bool = False) -> dict

returns a dict with a full report on features; optionally also reports the problem specs and plots costs/violations over time

reportProblem(self: robotic.ry.KOMO) -> str
setConfig(self: robotic.ry.KOMO, config: robotic.ry.Config, enableCollisions: bool) -> None

[deprecated] please set directly in constructor

setTiming(self: robotic.ry.KOMO, phases: float, slicesPerPhase: int, durationPerPhase: float, kOrder: int) -> None

[deprecated] please set directly in constructor

updateRootObjects(self: robotic.ry.KOMO, config: robotic.ry.Config) -> None

update root frames (those without parents) within all KOMO configurations

view(self: robotic.ry.KOMO, pause: bool = False, txt: str = None) -> int
view_close(self: robotic.ry.KOMO) -> None
view_play(self: robotic.ry.KOMO, pause: bool = False, delay: float = 0.1, saveVideoPath: str = None) -> int
class robotic.ry.NLP

Representation of a Nonlinear Mathematical Program

evaluate(self: robotic.ry.NLP, arg0: arr) -> Tuple[arr, arr]

query the NLP at a point $x$; returns the tuple $(\phi, J)$, which is the feature vector and its Jacobian; features define cost terms, sum-of-squares (sos) terms, inequalities, and equalities, depending on ‘getFeatureTypes’

getBounds(self: robotic.ry.NLP) -> Tuple[arr, arr]

returns the tuple $(b_{lo}, b_{up})$, where both vectors have the same dimensionality as $x$ (or size zero, if there are no bounds)

getDimension(self: robotic.ry.NLP) -> int

return the dimensionality of $x$

getFHessian(self: robotic.ry.NLP, arg0: arr) -> arr

returns the Hessian of the sum of $f$-terms

getFeatureTypes(self: robotic.ry.NLP) -> List[ObjectiveType]

features (entries of $\phi$) can be of one of the types (ry.OT.f, ry.OT.sos, ry.OT.ineq, ry.OT.eq), which mean (cost, sum-of-squares, inequality, equality). The total cost $f(x)$ is the sum of all f-terms plus the sum-of-squares of sos-terms.

getInitializationSample(self: robotic.ry.NLP, previousOptima: arr = array(0.0078125)) -> arr

returns a sample (e.g. uniform within bounds) to initialize an optimization – not necessarily feasible

report(self: robotic.ry.NLP, arg0: int) -> str

displays semantic information on the last query

class robotic.ry.NLP_Factory
setBounds(self: robotic.ry.NLP_Factory, arg0: arr, arg1: arr) -> None
setDimension(self: robotic.ry.NLP_Factory, arg0: int) -> None
setEvalCallback(self: robotic.ry.NLP_Factory, arg0: Callable[[arr], Tuple[arr, arr]]) -> None
setFeatureTypes(self: robotic.ry.NLP_Factory, arg0: Array<T>) -> None
testCallingEvalCallback(self: robotic.ry.NLP_Factory, arg0: arr) -> Tuple[arr, arr]
class robotic.ry.NLP_Solver

An interface to a portfolio of solvers

getOptions(self: robotic.ry.NLP_Solver) -> robotic.ry.NLP_SolverOptions
getTrace_J(self: robotic.ry.NLP_Solver) -> arr
getTrace_costs(self: robotic.ry.NLP_Solver) -> arr
getTrace_phi(self: robotic.ry.NLP_Solver) -> arr
getTrace_x(self: robotic.ry.NLP_Solver) -> arr
reportLagrangeGradients(self: robotic.ry.NLP_Solver, arg0: StringA) -> dict
setOptions(self: robotic.ry.NLP_Solver, verbose: int = 1, stopTolerance: float = 0.01, stopFTolerance: float = -1.0, stopGTolerance: float = -1.0, stopEvals: int = 1000, maxStep: float = 0.2, damping: float = 1.0, stepInc: float = 1.5, stepDec: float = 0.5, wolfe: float = 0.01, muInit: float = 1.0, muInc: float = 5.0, muMax: float = 10000.0, muLBInit: float = 0.1, muLBDec: float = 0.2) -> robotic.ry.NLP_Solver

set solver options

setProblem(self: robotic.ry.NLP_Solver, arg0: robotic.ry.NLP) -> robotic.ry.NLP_Solver
setSolver(self: robotic.ry.NLP_Solver, arg0: NLP_SolverID) -> robotic.ry.NLP_Solver
setTracing(self: robotic.ry.NLP_Solver, arg0: bool, arg1: bool, arg2: bool, arg3: bool) -> robotic.ry.NLP_Solver
solve(self: robotic.ry.NLP_Solver, resampleInitialization: int = -1) -> SolverReturn
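Example (a sketch of solving the KOMO problem from the sketch above; assumes the NLP_Solver() default constructor):

  solver = ry.NLP_Solver()
  solver.setProblem(komo.nlp())
  solver.setOptions(stopTolerance=1e-3, verbose=1)
  ret = solver.solve()
  print(ret.dict())      # SolverReturn, reported as a dict
  q = komo.getPath()     # the optimized configuration(s)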
class robotic.ry.NLP_SolverID

Members:

gradientDescent

rprop

LBFGS

newton

augmentedLag

squaredPenalty

logBarrier

singleSquaredPenalty

NLopt

Ipopt

Ceres

property name
class robotic.ry.NLP_SolverOptions

solver options

dict(self: robotic.ry.NLP_SolverOptions) -> dict
set_damping(self: robotic.ry.NLP_SolverOptions, arg0: float) -> robotic.ry.NLP_SolverOptions
set_maxStep(self: robotic.ry.NLP_SolverOptions, arg0: float) -> robotic.ry.NLP_SolverOptions
set_muInc(self: robotic.ry.NLP_SolverOptions, arg0: float) -> robotic.ry.NLP_SolverOptions
set_muInit(self: robotic.ry.NLP_SolverOptions, arg0: float) -> robotic.ry.NLP_SolverOptions
set_muLBDec(self: robotic.ry.NLP_SolverOptions, arg0: float) -> robotic.ry.NLP_SolverOptions
set_muLBInit(self: robotic.ry.NLP_SolverOptions, arg0: float) -> robotic.ry.NLP_SolverOptions
set_muMax(self: robotic.ry.NLP_SolverOptions, arg0: float) -> robotic.ry.NLP_SolverOptions
set_stepDec(self: robotic.ry.NLP_SolverOptions, arg0: float) -> robotic.ry.NLP_SolverOptions
set_stepInc(self: robotic.ry.NLP_SolverOptions, arg0: float) -> robotic.ry.NLP_SolverOptions
set_stopEvals(self: robotic.ry.NLP_SolverOptions, arg0: int) -> robotic.ry.NLP_SolverOptions
set_stopFTolerance(self: robotic.ry.NLP_SolverOptions, arg0: float) -> robotic.ry.NLP_SolverOptions
set_stopGTolerance(self: robotic.ry.NLP_SolverOptions, arg0: float) -> robotic.ry.NLP_SolverOptions
set_stopTolerance(self: robotic.ry.NLP_SolverOptions, arg0: float) -> robotic.ry.NLP_SolverOptions
set_verbose(self: robotic.ry.NLP_SolverOptions, arg0: int) -> robotic.ry.NLP_SolverOptions
set_wolfe(self: robotic.ry.NLP_SolverOptions, arg0: float) -> robotic.ry.NLP_SolverOptions
class robotic.ry.OT

Members:

none

f

sos

ineq

eq

ineqB

ineqP

property name
class robotic.ry.PathFinder

todo doc

setExplicitCollisionPairs(self: robotic.ry.PathFinder, collisionPairs: StringA) -> robotic.ry.PathFinder
setProblem(self: robotic.ry.PathFinder, Configuration: robotic.ry.Config, starts: arr, goals: arr) -> robotic.ry.PathFinder
solve(self: robotic.ry.PathFinder) -> SolverReturn
step(self: robotic.ry.PathFinder) -> bool
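Example (a sketch of path finding between two joint configurations; assumes the PathFinder() default constructor, and q_start, q_goal are assumed valid joint vectors for C):

  rrt = ry.PathFinder()
  rrt.setProblem(C, [q_start], [q_goal])
  ret = rrt.solve()
  print(ret.dict())   # inspect feasibility and the solution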
class robotic.ry.ST

Members:

none

box

sphere

capsule

mesh

cylinder

marker

pointCloud

ssCvx

ssBox

ssCylinder

ssBoxElip

quad

camera

sdf

property name
class robotic.ry.SY

Members:

touch

above

inside

oppose

restingOn

poseEq

positionEq

stableRelPose

stablePose

stable

stableOn

dynamic

dynamicOn

dynamicTrans

quasiStatic

quasiStaticOn

downUp

break

stableZero

contact

contactStick

contactComplementary

bounce

push

magic

magicTrans

pushAndPlace

topBoxGrasp

topBoxPlace

dampMotion

identical

alignByInt

makeFree

forceBalance

relPosY

touchBoxNormalX

touchBoxNormalY

touchBoxNormalZ

boxGraspX

boxGraspY

boxGraspZ

lift

stableYPhi

stableOnX

stableOnY

end

property name
class robotic.ry.Simulation

A direct simulation interface to physics engines (Nvidia PhysX, Bullet) – see https://marctoussaint.github.io/robotics-course/tutorials/simulation.html

addImp(self: robotic.ry.Simulation, arg0: robotic.ry.ImpType, arg1: StringA, arg2: arr) -> None
addSensor(self: robotic.ry.Simulation, sensorName: str, frameAttached: str = '', width: int = 640, height: int = 360, focalLength: float = -1.0, orthoAbsHeight: float = -1.0, zRange: arr = []) -> rai::CameraView::Sensor
closeGripper(self: robotic.ry.Simulation, gripperFrameName: str, width: float = 0.05, speed: float = 0.3, force: float = 20.0) -> None
depthData2pointCloud(self: robotic.ry.Simulation, arg0: numpy.ndarray[numpy.float32], arg1: List[float]) -> numpy.ndarray[numpy.float64]
getGripperIsGrasping(self: robotic.ry.Simulation, gripperFrameName: str) -> bool
getGripperWidth(self: robotic.ry.Simulation, gripperFrameName: str) -> float
getGroundTruthPosition(self: robotic.ry.Simulation, arg0: str) -> numpy.ndarray[numpy.float64]
getGroundTruthRotationMatrix(self: robotic.ry.Simulation, arg0: str) -> numpy.ndarray[numpy.float64]
getGroundTruthSize(self: robotic.ry.Simulation, arg0: str) -> numpy.ndarray[numpy.float64]
getImageAndDepth(self: robotic.ry.Simulation) -> tuple
getScreenshot(self: robotic.ry.Simulation) -> Array<T>
getState(self: robotic.ry.Simulation) -> tuple

returns a 4-tuple of frame state, joint state, frame velocities (linear & angular), and joint velocities

getTimeToMove(self: robotic.ry.Simulation) -> float
get_q(self: robotic.ry.Simulation) -> arr
get_qDot(self: robotic.ry.Simulation) -> arr
loadTeleopCallbacks(self: robotic.ry.Simulation) -> None
openGripper(self: robotic.ry.Simulation, gripperFrameName: str, width: float = 0.075, speed: float = 0.3) -> None
pushConfigurationToSimulator(self: robotic.ry.Simulation, frameVelocities: arr = array(0.0078125), jointVelocities: arr = array(0.0078125)) -> None

set the simulator to the full (frame) state of the configuration

selectSensor(self: robotic.ry.Simulation, sensorName: str) -> rai::CameraView::Sensor
setMoveto(self: robotic.ry.Simulation, path: arr, t: float, append: bool = True) -> None

set the spline reference to generate motion

setState(self: robotic.ry.Simulation, frameState: arr, jointState: arr = array(0.0078125), frameVelocities: arr = array(0.0078125), jointVelocities: arr = array(0.0078125)) -> None
step(self: robotic.ry.Simulation, u_control: arr, tau: float = 0.01, u_mode: robotic.ry.ControlMode = <ControlMode.velocity: 2>) -> None
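Example (a sketch of stepping a passive physics simulation; it assumes the Simulation constructor takes (Config, SimulationEngine, verbose), which is not specified in this reference):

  import time

  S = ry.Simulation(C, ry.SimulationEngine.physx, verbose=0)
  tau = 0.01
  for k in range(100):
      S.step([], tau, ry.ControlMode.none)   # no control input: passive dynamics
      time.sleep(tau)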
class robotic.ry.SimulationEngine

Members:

physx

bullet

kinematic

property name
class robotic.ry.SolverReturn

return value of an NLP solve call

dict(self: robotic.ry.SolverReturn) -> dict
robotic.ry.compiled() -> str

return a compile date+time version string

robotic.ry.depthImage2PointCloud(depth: numpy.ndarray[numpy.float32], fxycxy: arr) -> arr

return the point cloud from the depth image
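Example (a sketch that converts a depth image to a point cloud and attaches it to the configuration for display; it reuses the BotOp camera sketch above, and ‘pclFrame’ is an arbitrary new frame name):

  rgb, depth = bot.getImageAndDepth('camera')
  fxycxy = bot.getCameraFxycxy('camera')
  pts = ry.depthImage2PointCloud(depth, fxycxy)

  f = C.addFrame('pclFrame', 'camera')   # make it a child of the camera frame
  f.setPointCloud(pts, rgb)
  C.view()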

robotic.ry.getStartGoalPath(arg0: robotic.ry.Config, arg1: arr, arg2: arr) -> arr
robotic.ry.params_add(arg0: dict) -> None

add/set parameters

robotic.ry.params_clear() -> None

clear all parameters

robotic.ry.params_file(arg0: str) -> None

add parameters from a file

robotic.ry.params_print() -> None

print the parameters

robotic.ry.raiPath(arg0: str) -> rai::String

get a path relative to the rai base path

robotic.ry.setRaiPath(arg0: str) -> None

redefine the rai (or rai-robotModels) path