robotic python API

rai bindings

class robotic._robotic.ArgWord

[todo: replace by str]

Members:

_left

_right

_sequence

_path

property name
class robotic._robotic.BotOp

Robot Operation interface – see https://marctoussaint.github.io/robotics-course/tutorials/1b-botop.html

getCameraFxycxy(self: robotic._robotic.BotOp, sensorName: str) arr

returns camera intrinsics

getGripperPos(self: robotic._robotic.BotOp, leftRight: robotic._robotic.ArgWord) float

returns the gripper pos

getImageAndDepth(self: robotic._robotic.BotOp, sensorName: str) tuple

returns image and depth from a camera sensor

getImageDepthPcl(self: robotic._robotic.BotOp, sensorName: str, globalCoordinates: bool = False) tuple

returns image, depth and point cloud (assuming sensor knows intrinsics) from a camera sensor, optionally in global instead of camera-frame-relative coordinates

getKeyPressed(self: robotic._robotic.BotOp) int

get key pressed in window at last sync

getTimeToEnd(self: robotic._robotic.BotOp) float

get time-to-go of the current spline reference that is tracked (use getTimeToEnd()<=0. to check if motion execution is done)
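
For example, a common pattern is to command a target and keep the workspace configuration synced until the spline reference is finished. The following is a minimal sketch; the BotOp constructor arguments and the scene file name are assumptions not documented in this section.

  import robotic as ry

  C = ry.Config()
  C.addFile(ry.raiPath('scenarios/pandaSingle.g'))  # hypothetical scene file
  bot = ry.BotOp(C, False)  # assumed constructor: (configuration, useRealRobot)

  bot.moveTo(bot.get_qHome(), timeCost=1.)
  while bot.getTimeToEnd() > 0.:  # spline reference still being tracked
      bot.sync(C, .1)             # keep the workspace C synced with the robot state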

get_q(self: robotic._robotic.BotOp) arr

get the current (real) robot joint vector

get_qDot(self: robotic._robotic.BotOp) arr

get the current (real) robot joint velocities

get_qHome(self: robotic._robotic.BotOp) arr

returns home joint vector (defined as the configuration C when you created BotOp)

get_t(self: robotic._robotic.BotOp) float

returns the control time (absolute time managed by the high freq tracking controller)

get_tauExternal(self: robotic._robotic.BotOp) arr

get the current (real) robot joint torques (external: gravity & acceleration removed) – each call averages over the measurements since the previous call; the first call might return nonsense!

gripperClose(self: robotic._robotic.BotOp, leftRight: robotic._robotic.ArgWord, force: float = 10.0, width: float = 0.05, speed: float = 0.1) None

close gripper

gripperCloseGrasp(self: robotic._robotic.BotOp, leftRight: robotic._robotic.ArgWord, objName: str, force: float = 10.0, width: float = 0.05, speed: float = 0.1) None

close gripper and indicate what should be grasped – makes no difference in real, but helps simulation to mimic grasping more reliably

gripperDone(self: robotic._robotic.BotOp, leftRight: robotic._robotic.ArgWord) bool

returns whether the gripper motion is done

gripperMove(self: robotic._robotic.BotOp, leftRight: robotic._robotic.ArgWord, width: float = 0.075, speed: float = 0.2) None

move the gripper to width (default: open)
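
Continuing the sketch above, a gripper command is typically followed by polling gripperDone (the width value is only illustrative):

  bot.gripperMove(ry.ArgWord._left, width=.02, speed=.2)
  while not bot.gripperDone(ry.ArgWord._left):
      bot.sync(C, .1)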

hold(self: robotic._robotic.BotOp, floating: bool = False, damping: bool = True) None

hold the robot with a trivial PD controller; 'floating' means the reference tracks the real state; without damping the robot is free floating

home(self: robotic._robotic.BotOp, C: robotic._robotic.Config) None

immediately drive the robot home (see get_qHome); keeps argument C synced; same as moveTo(qHome, 1., True); wait(C);

move(self: robotic._robotic.BotOp, path: arr, times: arr, overwrite: bool = False, overwriteCtrlTime: float = -1.0) None

core motion command: set a spline motion reference; if only a single time [T] is given for multiple waypoints, it assumes equal time spacing with TOTAL time T

By default, the given spline is APPENDED to the current reference spline. The user can also enforce the given spline to overwrite the current reference starting at the given absolute ctrlTime. This allows implementation of reactive (e.g. MPC-style) control. However, the user needs to take care that overwriting is done in a smooth way, i.e., that the given spline starts with a pos/vel that is close to the pos/vel of the current reference at the given ctrlTime.
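
A hedged sketch of the overwrite mechanism: read the current control time and send a short spline that overwrites the reference starting at that time. The waypoints q1 and q2 are placeholders; in an MPC-style loop they would come from a fast re-planner and should start close to the current reference.

  t0 = bot.get_t()                 # absolute control time of the tracking controller
  bot.move([q1, q2], [.5, 1.],     # two waypoints, .5s and 1s after t0
           overwrite=True,
           overwriteCtrlTime=t0)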

moveAutoTimed(self: robotic._robotic.BotOp, path: arr, maxVel: float = 1.0, maxAcc: float = 1.0) None

helper to execute a path (typically fine resolution, from KOMO or RRT) with equal time spacing chosen for given max vel/acc

moveTo(self: robotic._robotic.BotOp, q_target: arr, timeCost: float = 1.0, overwrite: bool = False) None

helper to move to a single joint vector target, where timing is chosen optimally based on the given timing cost

When using overwrite, this immediately steers to the target – use this as a well-timed reactive q_target controller

setCompliance(self: robotic._robotic.BotOp, J: arr, compliance: float = 0.5) None

set a task-space compliance, where J defines the task-space Jacobian, and compliance goes from 0 (no compliance) to 1 (full compliance, but still some damping)

setControllerWriteData(self: robotic._robotic.BotOp, arg0: int) None

[for internal debugging only] triggers writing control data into a file

stop(self: robotic._robotic.BotOp, C: robotic._robotic.Config) None

immediately stop the robot; keeps argument C synced; same as moveTo(get_q(), 1., True); wait(C);

sync(self: robotic._robotic.BotOp, C: robotic._robotic.Config, waitTime: float = 0.1, viewMsg: rai::String = '') int

sync your workspace configuration C with the robot state

wait(self: robotic._robotic.BotOp, C: robotic._robotic.Config, forKeyPressed: bool = True, forTimeToEnd: bool = True, forGripper: bool = False) int

repeatedly sync your workspace C until a key is pressed or motion ends (optionally)

class robotic._robotic.CameraView

Offscreen rendering

computeImageAndDepth(self: robotic._robotic.CameraView, config: robotic._robotic.Config, visualsOnly: bool = True) tuple

returns image and depth from a camera sensor; the ‘config’ argument needs to be the same configuration as in the constructor, but in a new state

computeSegmentationID(self: robotic._robotic.CameraView) Array<T>

return a uint16 array with object ID segmentation

computeSegmentationImage(self: robotic._robotic.CameraView) Array<T>

return an rgb image encoding the object ID segmentation

getFxycxy(self: robotic._robotic.CameraView) arr

return the camera intrinsics f_x, f_y, c_x, c_y

setCamera(self: robotic._robotic.CameraView, cameraFrameName: str) rai::CameraView::Sensor

select a camera, typically a frame that has camera info attributes
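
A minimal offscreen-rendering sketch, assuming the CameraView constructor takes the configuration and that 'cameraTop' is a frame carrying camera attributes (both are assumptions, not documented above):

  cam = ry.CameraView(C)            # assumed constructor: takes the configuration
  cam.setCamera('cameraTop')        # hypothetical camera frame name
  rgb, depth = cam.computeImageAndDepth(C)
  fxycxy = cam.getFxycxy()          # intrinsics of the selected camera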

class robotic._robotic.Config

Core data structure to represent a kinematic configuration (essentially a tree of frames). See https://marctoussaint.github.io/robotics-course/tutorials/1a-configurations.html
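
A minimal sketch of building and inspecting a configuration; the model file name is illustrative:

  import robotic as ry

  C = ry.Config()
  C.addFile(ry.raiPath('panda/panda.g'))  # hypothetical model file from the rai path
  print(C.getJointNames())
  q = C.getJointState()
  q[0] += .1
  C.setJointState(q)
  C.view(True, 'first joint shifted by 0.1 rad')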

addConfigurationCopy(self: robotic._robotic.Config, config: robotic._robotic.Config, prefix: rai::String = '', tau: float = 1.0) robotic._robotic.Frame
addFile(self: robotic._robotic.Config, filename: str, namePrefix: str = '') robotic._robotic.Frame

add the contents of the file to C

addFrame(self: robotic._robotic.Config, name: str, parent: str = '', args: str = '') robotic._robotic.Frame

add a new frame to C; optionally make this a child to the given parent; use the Frame methods to set properties of the new frame
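
For example, adding a simple box and setting its properties via the chained Frame setters documented further below (name, size, and pose are arbitrary):

  box = C.addFrame('box') \
      .setShape(ry.ST.ssBox, [.1, .1, .1, .02]) \
      .setPosition([0., 0., 1.]) \
      .setColor([1., 0., 0.])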

animate(self: robotic._robotic.Config) None

displays the configuration while articulating all DOFs one after the other

animateSpline(self: robotic._robotic.Config, T: int = 3) None

animate with a random spline within the joint-limits bounding box [T = number of spline points]

attach(self: robotic._robotic.Config, arg0: str, arg1: str) None

change the configuration by creating a rigid joint from frame1 to frame2, adopting their current relative pose. This also breaks the first joint that is parental to frame2 and reverses the topological order from frame2 to the broken joint

clear(self: robotic._robotic.Config) None

clear all frames and additional data; becomes the empty configuration, with no frames

computeCollisions(self: robotic._robotic.Config) None

[should be obsolete; getCollision* methods auto ensure proxies] call the broadphase collision engine (SWIFT++ or FCL) to generate the list of collisions (or near proximities) between all frame shapes that have the collision tag set non-zero

delFrame(self: robotic._robotic.Config, frameName: str) None

destroy and remove a frame from C

eval(self: robotic._robotic.Config, featureSymbol: robotic._robotic.FS, frames: StringA = [], scale: arr = array(0.0078125), target: arr = array(0.0078125), order: int = -1) tuple

evaluate a feature – see https://marctoussaint.github.io/robotics-course/tutorials/features.html
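
A sketch of evaluating a feature; eval returns the feature value and its Jacobian with respect to the joint state (the frame names are placeholders):

  y, J = C.eval(ry.FS.positionDiff, ['gripper', 'box'])  # hypothetical frame names
  print('feature value:', y, 'Jacobian shape:', J.shape)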

frame(self: robotic._robotic.Config, frameName: str, warnIfNotExist: bool = True) robotic._robotic.Frame

get access to a frame by name; use the Frame methods to set/get frame properties

frames(self: robotic._robotic.Config) list[robotic._robotic.Frame]
getCollidablePairs(self: robotic._robotic.Config) StringA

returns the list of collidable pairs – this should help debugging the ‘contact’ flag settings in a configuration

getCollisionFree(self: robotic._robotic.Config) bool

returns if the configuration is collision free (binary collision check, using FCL only; collidable objects need to have contact flag)

getCollisions(self: robotic._robotic.Config, belowMargin: float = 1.0) list

return the results of collision computations: a list of 3-tuples (frame1, frame2, distance). Optionally report only distances below a margin. To get really precise distances and penetrations, use the FS.distance feature with the two frame names
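
A sketch of querying collisions; only pairs whose frames have the contact flag set are reported, and the margin value is arbitrary:

  for f1, f2, d in C.getCollisions(belowMargin=.05):
      print(f'{f1} -- {f2}: distance {d:.4f}')
  print('total penetration:', C.getCollisionsTotalPenetration())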

getCollisionsTotalPenetration(self: robotic._robotic.Config) float

returns the sum of all penetrations (using FCL for broadphase; and low-level GJK/MRP for fine pair-wise distance/penetration computation)

getFrame(self: robotic._robotic.Config, frameName: str, warnIfNotExist: bool = True) robotic._robotic.Frame

get access to a frame by name; use the Frame methods to set/get frame properties

getFrameDimension(self: robotic._robotic.Config) int

get the total number of frames

getFrameNames(self: robotic._robotic.Config) list[str]

get the list of frame names

getFrameState(self: robotic._robotic.Config) numpy.ndarray[numpy.float64]

get the frame state as an n-times-7 numpy matrix, with a 7D pose per frame

getJointDimension(self: robotic._robotic.Config) int

get the total number of degrees of freedom

getJointLimits(self: robotic._robotic.Config) arr

get the joint limits as a n-by-2 matrix; for dofs that do not have limits defined, the entries are [0,-1] (i.e. upper limit < lower limit)

getJointNames(self: robotic._robotic.Config) StringA

get the list of joint names

getJointState(self: robotic._robotic.Config) arr

get the joint state as a numpy vector, optionally only for a subset of joints specified as list of joint names

report(self: robotic._robotic.Config) str

return a string with basic info (#frames, etc)

selectJoints(self: robotic._robotic.Config, jointNames: list[str], notThose: bool = False) None

redefine what are considered the DOFs of this configuration: only joints listed in jointNames are considered part of the joint state and define the number of DOFs

setFrameState(*args, **kwargs)

Overloaded function.

  1. setFrameState(self: robotic._robotic.Config, X: list[float], frames: list[str] = []) -> None

set the frame state, optionally only for a subset of frames specified as list of frame names

  2. setFrameState(self: robotic._robotic.Config, X: numpy.ndarray, frames: list[str] = []) -> None

set the frame state, optionally only for a subset of frames specified as list of frame names

setJointState(self: robotic._robotic.Config, q: arr, joints: list = []) None

set the joint state, optionally only for a subset of joints specified as list of joint names

setJointStateSlice(self: robotic._robotic.Config, arg0: list[float], arg1: int) None
view(self: robotic._robotic.Config, pause: bool = False, message: str = None) int

open a view window for the configuration

view_close(self: robotic._robotic.Config) None

close the view

view_focalLength(self: robotic._robotic.Config) float

return the focal length of the view camera (only intrinsic parameter)

view_fxycxy(self: robotic._robotic.Config) arr

return (fx, fy, cx, cy): the focal length and image center in PIXEL UNITS

view_getDepth(self: robotic._robotic.Config) numpy.ndarray[numpy.float32]
view_getRgb(self: robotic._robotic.Config) numpy.ndarray[numpy.uint8]
view_pose(self: robotic._robotic.Config) arr

return the 7D pose of the view camera

view_raise(self: robotic._robotic.Config) None

raise the view

view_recopyMeshes(self: robotic._robotic.Config) None
view_savePng(self: robotic._robotic.Config, pathPrefix: str = 'z.vid/') None

saves a png image of the current view, numbered with a global counter, with the intention to make a video

view_setCamera(self: robotic._robotic.Config, arg0: robotic._robotic.Frame) None

set the camera pose to a frame, and check frame attributes for intrinsic parameters (focalLength, width, height)

watchFile(self: robotic._robotic.Config, arg0: str) None

launch a viewer that listens (via inotify) to changes of a file (made by you in an editor), and reloads, displays and animates the configuration whenever the file is changed

write(self: robotic._robotic.Config) str

write the full configuration in a string (roughly yaml), e.g. for file export

writeCollada(self: robotic._robotic.Config, filename: str, format: str = 'collada') None

write the full configuration in a collada file for export

writeMesh(self: robotic._robotic.Config, filename: str) None

write the full configuration in a ply mesh file

writeMeshes(self: robotic._robotic.Config, pathPrefix: str) None

write all object meshes in a directory

writeURDF(self: robotic._robotic.Config) str

write the full configuration as URDF in a string, e.g. for file export

class robotic._robotic.ControlMode

Members:

none

position

velocity

acceleration

spline

property name
class robotic._robotic.FS

Members:

position

positionDiff

positionRel

quaternion

quaternionDiff

quaternionRel

pose

poseDiff

poseRel

vectorX

vectorXDiff

vectorXRel

vectorY

vectorYDiff

vectorYRel

vectorZ

vectorZDiff

vectorZRel

scalarProductXX

scalarProductXY

scalarProductXZ

scalarProductYX

scalarProductYY

scalarProductYZ

scalarProductZZ

gazeAt

angularVel

accumulatedCollisions

jointLimits

distance

negDistance

oppose

qItself

jointState

aboveBox

insideBox

pairCollision_negScalar

pairCollision_vector

pairCollision_normal

pairCollision_p1

pairCollision_p2

standingAbove

physics

contactConstraints

energy

transAccelerations

transVelocities

property name
class robotic._robotic.Frame

A (coordinate) frame of a configuration, which can have a parent, and associated shape, joint, and/or inertia

addAttributes(self: robotic._robotic.Frame, arg0: dict) None

add/set attributes for the frame

getAttributes(self: robotic._robotic.Frame) dict

get frame attributes

getJointState(self: robotic._robotic.Frame) arr
getMesh(self: robotic._robotic.Frame) tuple
getMeshPoints(self: robotic._robotic.Frame) arr
getMeshTriangles(self: robotic._robotic.Frame) uintA
getPosition(self: robotic._robotic.Frame) arr
getQuaternion(self: robotic._robotic.Frame) arr
getRelativePosition(self: robotic._robotic.Frame) arr
getRelativeQuaternion(self: robotic._robotic.Frame) arr
getRelativeTransform(self: robotic._robotic.Frame) arr
getRotationMatrix(self: robotic._robotic.Frame) arr
getSize(self: robotic._robotic.Frame) arr
getTransform(self: robotic._robotic.Frame) arr
info(self: robotic._robotic.Frame) dict
setAttribute(self: robotic._robotic.Frame, arg0: str, arg1: float) robotic._robotic.Frame
setColor(self: robotic._robotic.Frame, arg0: arr) robotic._robotic.Frame
setContact(self: robotic._robotic.Frame, arg0: int) robotic._robotic.Frame
setConvexMesh(self: robotic._robotic.Frame, points: arr, colors: Array<T> = array(1, dtype=uint8), radius: float = 0.0) robotic._robotic.Frame

attach a convex mesh as shape

setDensity(self: robotic._robotic.Frame, data: Array<T>, size: arr) robotic._robotic.Frame
setImplicitSurface(self: robotic._robotic.Frame, data: Array<T>, size: arr, blur: int, resample: float = -1.0) robotic._robotic.Frame
setJoint(self: robotic._robotic.Frame, jointType: robotic._robotic.JT, limits: arr = array(0.0078125)) robotic._robotic.Frame
setJointState(self: robotic._robotic.Frame, arg0: arr) robotic._robotic.Frame
setMass(self: robotic._robotic.Frame, arg0: float) robotic._robotic.Frame
setMesh(self: robotic._robotic.Frame, vertices: arr, triangles: uintA, colors: Array<T> = array(1, dtype=uint8), cvxParts: uintA = array(0, dtype=uint32)) robotic._robotic.Frame

attach a mesh shape

setMeshAsLines(self: robotic._robotic.Frame, arg0: list[float]) None
setParent(self: robotic._robotic.Frame, parent: robotic._robotic.Frame, keepAbsolutePose_and_adaptRelativePose: bool = False, checkForLoop: bool = False) robotic._robotic.Frame
setPointCloud(self: robotic._robotic.Frame, points: arr, colors: Array<T> = array(1, dtype=uint8), normals: arr = array(0.0078125)) robotic._robotic.Frame

attach a point cloud shape

setPose(self: robotic._robotic.Frame, arg0: str) None
setPosition(self: robotic._robotic.Frame, arg0: arr) robotic._robotic.Frame
setQuaternion(self: robotic._robotic.Frame, arg0: arr) robotic._robotic.Frame
setRelativePose(self: robotic._robotic.Frame, arg0: str) None
setRelativePosition(self: robotic._robotic.Frame, arg0: arr) robotic._robotic.Frame
setRelativeQuaternion(self: robotic._robotic.Frame, arg0: arr) robotic._robotic.Frame
setShape(self: robotic._robotic.Frame, type: robotic._robotic.ST, size: arr) robotic._robotic.Frame
class robotic._robotic.JT

Members:

hingeX

hingeY

hingeZ

transX

transY

transZ

transXY

trans3

transXYPhi

transYPhi

universal

rigid

quatBall

phiTransXY

XBall

free

generic

tau

property name
class robotic._robotic.KOMO

A framework to define manipulation problems (IK, path optimization, sequential manipulation) as Nonlinear Mathematical Program (NLP). The actual NLP_Solver class is separate. (KOMO = k-order Markov Optimization) – see https://marctoussaint.github.io/robotics-course/tutorials/1c-komo.html

addControlObjective(self: robotic._robotic.KOMO, times: arr, order: int, scale: float = 1.0, target: arr = array(0.0078125), deltaFromSlice: int = 0, deltaToSlice: int = 0) Objective
  • times: (as for addObjective) the phase-interval in which this objective holds; [] means all times

  • order: what is penalized (order=0: penalizing the sqr distance to qHome; order=1: penalizing sqr distances between consecutive configurations, i.e. velocities; order=2: penalizing accelerations across 3 consecutive configurations)

  • scale: as usual, but modulated by a factor ‘sqrt(delta t)’ that somehow ensures total control costs are approximately independent of the choice of stepsPerPhase

addModeSwitch(self: robotic._robotic.KOMO, times: arr, newMode: robotic._robotic.SY, frames: StringA, firstSwitch: bool = True) None
addObjective(self: robotic._robotic.KOMO, times: arr, feature: robotic._robotic.FS, frames: StringA, type: ObjectiveType, scale: arr = array(0.0078125), target: arr = array(0.0078125), order: int = -1) None

central method to define objectives in the KOMO NLP:

  • times: the time intervals (subset of configurations in a path) over which this feature is active (irrelevant for IK)

  • feature: the feature symbol (see advanced Feature tutorial)

  • frames: the frames for which the feature is computed, given as list of frame names

  • type: whether this is a sum-of-squares (sos) cost, or eq or ineq constraint

  • scale: the matrix(!) by which the feature is multiplied

  • target: the offset which is subtracted from the feature (before scaling)
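
A minimal inverse-kinematics sketch using addObjective and addControlObjective; the KOMO constructor arguments and the frame names are assumptions, see the tutorial linked above for the authoritative version:

  komo = ry.KOMO(C, 1, 1, 0, False)  # assumed constructor: (config, phases, slicesPerPhase, kOrder, enableCollisions)
  komo.addControlObjective([], 0, 1e-1)  # order 0: stay close to the initial joint state
  komo.addObjective([], ry.FS.positionDiff, ['gripper', 'target'], ry.OT.eq, [1e1])  # hypothetical frames
  solver = ry.NLP_Solver()
  solver.setProblem(komo.nlp())
  ret = solver.solve()
  q_solution = komo.getPath()[0]  # single-slice problem: the IK solution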

addQuaternionNorms(self: robotic._robotic.KOMO, times: arr = array(0.0078125), scale: float = 3.0, hard: bool = True) None
addStableFrame(self: robotic._robotic.KOMO, name: str, parent: str, jointType: robotic._robotic.JT, stable: bool, initFrame: str = 0) robotic._robotic.Frame

complicated…

addTimeOptimization(self: robotic._robotic.KOMO) None
clearObjectives(self: robotic._robotic.KOMO) None
getFeatureNames(self: robotic._robotic.KOMO) StringA

returns a long list of feature names (per time slice!); this can be passed to the NLP_Solver when needed

getForceInteractions(self: robotic._robotic.KOMO) list
getFrameState(self: robotic._robotic.KOMO, arg0: int) arr
getPath(self: robotic._robotic.KOMO) arr
getPathFrames(self: robotic._robotic.KOMO) arr
getPathTau(self: robotic._robotic.KOMO) arr
getPath_qAll(self: robotic._robotic.KOMO) arrA
getSubProblem(self: robotic._robotic.KOMO, phase: int) tuple

return a tuple (configuration, start q0, end q1) for the given phase of this KOMO problem

getT(self: robotic._robotic.KOMO) int
info_objectiveErrorTraces(self: robotic._robotic.KOMO) arr

return a T-times-O matrix of objective error traces, for O objectives

info_objectiveNames(self: robotic._robotic.KOMO) StringA

return an array of O strings, one per objective

info_sliceCollisions(self: robotic._robotic.KOMO, t: int, belowMargin: float) rai::String

return string info on collisions below the margin in slice t

info_sliceErrors(self: robotic._robotic.KOMO, t: int, errorTraces: arr) rai::String

return string info of objectives and errors in slice t – needs errorTraces as input

initOrg(self: robotic._robotic.KOMO) None
initPhaseWithDofsPath(self: robotic._robotic.KOMO, t_phase: int, dofIDs: uintA, path: arr, autoResamplePath: bool = False) None
initRandom(self: robotic._robotic.KOMO, verbose: int = 0) None
initWithConstant(self: robotic._robotic.KOMO, q: arr) None
initWithPath(self: robotic._robotic.KOMO, q: arr) None
initWithWaypoints(self: robotic._robotic.KOMO, waypoints: arrA, waypointSlicesPerPhase: int = 1, interpolate: bool = False, qHomeInterpolate: float = 0.0, verbose: int = -1) uintA
nlp(self: robotic._robotic.KOMO) NLP

return the problem NLP

report(self: robotic._robotic.KOMO, specs: bool = False, listObjectives: bool = True, plotOverTime: bool = False) rai::Graph

returns a dict with the full list of features; optionally also reports problem specs and plots costs/violations over time

setConfig(self: robotic._robotic.KOMO, config: robotic._robotic.Config, enableCollisions: bool) None

[deprecated] please set directly in constructor

setTiming(self: robotic._robotic.KOMO, phases: float, slicesPerPhase: int, durationPerPhase: float, kOrder: int) None

[deprecated] please set directly in constructor

updateRootObjects(self: robotic._robotic.KOMO, config: robotic._robotic.Config) None

update root frames (without parents) within all KOMO configurations

view(self: robotic._robotic.KOMO, pause: bool = False, txt: str = None) int
view_close(self: robotic._robotic.KOMO) None
view_play(self: robotic._robotic.KOMO, pause: bool = False, delay: float = 0.1, saveVideoPath: str = None) bool
view_slice(self: robotic._robotic.KOMO, t: int, pause: bool = False) int
class robotic._robotic.NLP

Representation of a Nonlinear Mathematical Program

checkHessian(self: robotic._robotic.NLP, x: arr, tolerance: float) bool
checkJacobian(self: robotic._robotic.NLP, x: arr, tolerance: float, featureNames: StringA = []) bool
evaluate(self: robotic._robotic.NLP, arg0: arr) tuple[arr, arr]

query the NLP at a point $x$; returns the tuple $(phi,J)$, which is the feature vector and its Jacobian; features define cost terms, sum-of-square (sos) terms, inequalities, and equalities depending on ‘getFeatureTypes’

getBounds(self: robotic._robotic.NLP) arr

returns the tuple $(b_{lo},b_{up})$, where both vectors are of same dimensionality of $x$ (or size zero, if there are no bounds)

getDimension(self: robotic._robotic.NLP) int

return the dimensionality of $x$

getFHessian(self: robotic._robotic.NLP, arg0: arr) arr

returns Hessian of the sum of $f$-terms

getFeatureTypes(self: robotic._robotic.NLP) list[ObjectiveType]

features (entries of $phi$) can be one of (ry.OT.f, ry.OT.sos, ry.OT.ineq, ry.OT.eq), which means (cost, sum-of-squares, inequality, equality). The total cost $f(x)$ is the sum of all f-terms plus the sum of squares of the sos-terms.
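
As an illustration of this convention (a sketch only, not part of the API; nlp and x stand for any NLP instance and query point):

  phi, J = nlp.evaluate(x)
  types = nlp.getFeatureTypes()
  cost = sum(p for p, t in zip(phi, types) if t == ry.OT.f)       # plain cost terms
  cost += sum(p*p for p, t in zip(phi, types) if t == ry.OT.sos)  # sum-of-squares terms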

getInitializationSample(self: robotic._robotic.NLP, previousOptima: arr = array(0.0078125)) arr

returns a sample (e.g. uniform within bounds) to initialize an optimization – not necessarily feasible

report(self: robotic._robotic.NLP, arg0: int) str

displays semantic information on the last query

class robotic._robotic.NLP_Factory
setBounds(self: robotic._robotic.NLP_Factory, arg0: arr, arg1: arr) None
setDimension(self: robotic._robotic.NLP_Factory, arg0: int) None
setEvalCallback(self: robotic._robotic.NLP_Factory, arg0: Callable[[arr], tuple[arr, arr]]) None
setFeatureTypes(self: robotic._robotic.NLP_Factory, arg0: Array<T>) None
testCallingEvalCallback(self: robotic._robotic.NLP_Factory, arg0: arr) tuple[arr, arr]
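
A sketch of assembling a small problem with NLP_Factory; whether plain Python lists convert as written for the bounds and feature-type arguments is an assumption:

  import numpy as np

  nlp = ry.NLP_Factory()
  nlp.setDimension(2)
  nlp.setBounds([-2., -2.], [2., 2.])
  nlp.setFeatureTypes([ry.OT.sos, ry.OT.sos])  # two sum-of-squares residuals

  def eval_fn(x):
      # residuals and Jacobian of a toy least-squares (Rosenbrock-like) problem
      phi = np.array([x[0] - 1., 10. * (x[1] - x[0] * x[0])])
      J = np.array([[1., 0.], [-20. * x[0], 10.]])
      return phi, J

  nlp.setEvalCallback(eval_fn)
  phi, J = nlp.testCallingEvalCallback(np.array([0., 0.]))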
class robotic._robotic.NLP_Solver

An interface to a portfolio of solvers

getOptions(self: robotic._robotic.NLP_Solver) robotic._robotic.NLP_SolverOptions
getProblem(self: robotic._robotic.NLP_Solver) robotic._robotic.NLP
getTrace_J(self: robotic._robotic.NLP_Solver) arr
getTrace_costs(self: robotic._robotic.NLP_Solver) arr

returns a steps-times-3 array with rows (f+sos-costs, ineq, eq)

getTrace_phi(self: robotic._robotic.NLP_Solver) arr
getTrace_x(self: robotic._robotic.NLP_Solver) arr

returns a steps-times-n array with query points in each row

reportLagrangeGradients(self: robotic._robotic.NLP_Solver, list of names for each objective: StringA = []) dict

return dictionary of Lagrange gradients per objective

setOptions(self: robotic._robotic.NLP_Solver, verbose: int = 1, stopTolerance: float = 0.01, stopFTolerance: float = -1.0, stopGTolerance: float = -1.0, stopEvals: int = 1000, maxStep: float = 0.2, damping: float = 1.0, stepInc: float = 1.5, stepDec: float = 0.5, wolfe: float = 0.01, muInit: float = 1.0, muInc: float = 5.0, muMax: float = 10000.0, muLBInit: float = 0.1, muLBDec: float = 0.2, maxLambda: float = -1.0) robotic._robotic.NLP_Solver

set solver options

setProblem(self: robotic._robotic.NLP_Solver, arg0: robotic._robotic.NLP) robotic._robotic.NLP_Solver
setPyProblem(self: robotic._robotic.NLP_Solver, arg0: object) None
setSolver(self: robotic._robotic.NLP_Solver, arg0: NLP_SolverID) robotic._robotic.NLP_Solver
setTracing(self: robotic._robotic.NLP_Solver, arg0: bool, arg1: bool, arg2: bool, arg3: bool) robotic._robotic.NLP_Solver
solve(self: robotic._robotic.NLP_Solver, resampleInitialization: int = -1) SolverReturn
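
A sketch of driving the solver on an existing problem, e.g. the KOMO problem from the example above; the option values are arbitrary and the meaning of the four setTracing flags is not documented here:

  solver = ry.NLP_Solver()
  solver.setProblem(komo.nlp())
  solver.setOptions(verbose=0, stopTolerance=1e-3)
  solver.setTracing(True, True, True, True)
  ret = solver.solve()
  print(ret.dict())
  costs = solver.getTrace_costs()  # rows: (f+sos-costs, ineq, eq)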
class robotic._robotic.NLP_SolverID

Members:

gradientDescent

rprop

LBFGS

newton

augmentedLag

squaredPenalty

logBarrier

singleSquaredPenalty

NLopt

Ipopt

Ceres

property name
class robotic._robotic.NLP_SolverOptions

solver options

dict(self: robotic._robotic.NLP_SolverOptions) dict
set_damping(self: robotic._robotic.NLP_SolverOptions, arg0: float) robotic._robotic.NLP_SolverOptions
set_maxLambda(self: robotic._robotic.NLP_SolverOptions, arg0: float) robotic._robotic.NLP_SolverOptions
set_maxStep(self: robotic._robotic.NLP_SolverOptions, arg0: float) robotic._robotic.NLP_SolverOptions
set_muInc(self: robotic._robotic.NLP_SolverOptions, arg0: float) robotic._robotic.NLP_SolverOptions
set_muInit(self: robotic._robotic.NLP_SolverOptions, arg0: float) robotic._robotic.NLP_SolverOptions
set_muLBDec(self: robotic._robotic.NLP_SolverOptions, arg0: float) robotic._robotic.NLP_SolverOptions
set_muLBInit(self: robotic._robotic.NLP_SolverOptions, arg0: float) robotic._robotic.NLP_SolverOptions
set_muMax(self: robotic._robotic.NLP_SolverOptions, arg0: float) robotic._robotic.NLP_SolverOptions
set_stepDec(self: robotic._robotic.NLP_SolverOptions, arg0: float) robotic._robotic.NLP_SolverOptions
set_stepInc(self: robotic._robotic.NLP_SolverOptions, arg0: float) robotic._robotic.NLP_SolverOptions
set_stopEvals(self: robotic._robotic.NLP_SolverOptions, arg0: int) robotic._robotic.NLP_SolverOptions
set_stopFTolerance(self: robotic._robotic.NLP_SolverOptions, arg0: float) robotic._robotic.NLP_SolverOptions
set_stopGTolerance(self: robotic._robotic.NLP_SolverOptions, arg0: float) robotic._robotic.NLP_SolverOptions
set_stopTolerance(self: robotic._robotic.NLP_SolverOptions, arg0: float) robotic._robotic.NLP_SolverOptions
set_verbose(self: robotic._robotic.NLP_SolverOptions, arg0: int) robotic._robotic.NLP_SolverOptions
set_wolfe(self: robotic._robotic.NLP_SolverOptions, arg0: float) robotic._robotic.NLP_SolverOptions
class robotic._robotic.OT

Members:

none

f

sos

ineq

eq

ineqB

ineqP

property name
class robotic._robotic.PathFinder

todo doc

setExplicitCollisionPairs(self: robotic._robotic.PathFinder, collisionPairs: StringA) None

can only be called after setProblem

setProblem(self: robotic._robotic.PathFinder, Configuration: robotic._robotic.Config, starts: arr, goals: arr) None
solve(self: robotic._robotic.PathFinder) SolverReturn
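
A minimal RRT sketch; q_start and q_goal are placeholders, and that the returned SolverReturn carries the path in its x field is an assumption not documented above:

  rrt = ry.PathFinder()
  rrt.setProblem(C, [q_start], [q_goal])
  ret = rrt.solve()
  path = ret.x  # assumed: the found joint path, one waypoint per row
  bot.moveAutoTimed(path, maxVel=1., maxAcc=1.)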
class robotic._robotic.ST

Members:

none

box

sphere

capsule

mesh

cylinder

marker

pointCloud

ssCvx

ssBox

ssCylinder

ssBoxElip

quad

camera

sdf

property name
class robotic._robotic.SY

Members:

touch

above

inside

oppose

restingOn

poseEq

positionEq

stableRelPose

stablePose

stable

stableOn

dynamic

dynamicOn

dynamicTrans

quasiStatic

quasiStaticOn

downUp

break

stableZero

contact

contactStick

contactComplementary

bounce

push

magic

magicTrans

pushAndPlace

topBoxGrasp

topBoxPlace

dampMotion

identical

alignByInt

makeFree

forceBalance

relPosY

touchBoxNormalX

touchBoxNormalY

touchBoxNormalZ

boxGraspX

boxGraspY

boxGraspZ

lift

stableYPhi

stableOnX

stableOnY

end

property name
class robotic._robotic.Simulation

A direct simulation interface to physics engines (Nvidia PhysX, Bullet) – see https://marctoussaint.github.io/robotics-course/tutorials/simulation.html
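
A minimal passive-stepping sketch; the constructor signature and the verbose keyword are assumptions:

  sim = ry.Simulation(C, ry.SimulationEngine.physx, verbose=0)  # assumed constructor
  tau = .01
  for t in range(200):
      sim.step([], tau, ry.ControlMode.none)  # no controls: let physics evolve passively
      C.view()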

addSensor(self: robotic._robotic.Simulation, sensorName: str, frameAttached: str = '', width: int = 640, height: int = 360, focalLength: float = -1.0, orthoAbsHeight: float = -1.0, zRange: arr = []) rai::CameraView::Sensor
attach(self: robotic._robotic.Simulation, gripper: robotic._robotic.Frame, obj: robotic._robotic.Frame) None
closeGripper(self: robotic._robotic.Simulation, gripperFrameName: str, width: float = 0.05, speed: float = 0.3, force: float = 20.0) None
depthData2pointCloud(self: robotic._robotic.Simulation, arg0: numpy.ndarray[numpy.float32], arg1: list[float]) numpy.ndarray[numpy.float64]
detach(self: robotic._robotic.Simulation, obj: robotic._robotic.Frame) None
getGripperIsGrasping(self: robotic._robotic.Simulation, gripperFrameName: str) bool
getGripperWidth(self: robotic._robotic.Simulation, gripperFrameName: str) float
getImageAndDepth(self: robotic._robotic.Simulation) tuple
getScreenshot(self: robotic._robotic.Simulation) Array<T>
getState(self: robotic._robotic.Simulation) tuple

returns a 4-tuple of frame state, joint state, frame velocities (linear & angular), and joint velocities

getTimeToSplineEnd(self: robotic._robotic.Simulation) float
get_q(self: robotic._robotic.Simulation) arr
get_qDot(self: robotic._robotic.Simulation) arr
openGripper(self: robotic._robotic.Simulation, gripperFrameName: str, width: float = 0.075, speed: float = 0.3) None
pushConfigurationToSimulator(self: robotic._robotic.Simulation, frameVelocities: arr = array(0.0078125), jointVelocities: arr = array(0.0078125)) None

set the simulator to the full (frame) state of the configuration

resetSplineRef(self: robotic._robotic.Simulation) None

reset the spline reference, i.e., clear the current spline buffer and initialize it to a constant spline at the current position (to which setSplineRef can append)

selectSensor(self: robotic._robotic.Simulation, sensorName: str) rai::CameraView::Sensor
setSplineRef(self: robotic._robotic.Simulation, path: arr, times: arr, append: bool = True) None

set the spline reference to generate motion:

  • path: single configuration, or sequence of spline control points

  • times: array with single total duration, or time for each control point (times.N==path.d0)

  • append: append (with zero-velocity at append), or smoothly overwrite

setState(self: robotic._robotic.Simulation, frameState: arr, jointState: arr = array(0.0078125), frameVelocities: arr = array(0.0078125), jointVelocities: arr = array(0.0078125)) None
step(self: robotic._robotic.Simulation, u_control: arr, tau: float = 0.01, u_mode: robotic._robotic.ControlMode = <ControlMode.velocity: 2>) None
class robotic._robotic.SimulationEngine

Members:

physx

bullet

kinematic

property name
class robotic._robotic.SolverReturn

return value of an NLP solve call

dict(self: robotic._robotic.SolverReturn) dict
robotic._robotic.compiled() str

return a compile date+time version string

robotic._robotic.depthImage2PointCloud(depth: numpy.ndarray[numpy.float32], fxycxy: arr) arr

return the point cloud from the depth image
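
For example, combined with BotOp's camera access (the sensor name is a placeholder):

  rgb, depth = bot.getImageAndDepth('cameraWrist')  # hypothetical sensor name
  fxycxy = bot.getCameraFxycxy('cameraWrist')
  pcl = ry.depthImage2PointCloud(depth, fxycxy)     # one 3D point per pixel, in camera coordinates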

robotic._robotic.params_add(python dictionary or params to add: dict) None

add/set parameters
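
For example (the parameter key is illustrative):

  ry.params_add({'botsim/verbose': 2})  # hypothetical parameter key
  ry.params_print()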

robotic._robotic.params_clear() None

clear all parameters

robotic._robotic.params_file(filename: str) None

add parameters from a file

robotic._robotic.params_print() None

print the parameters

robotic._robotic.raiPath(arg0: str) rai::String

get a path relative to rai base path

robotic._robotic.setRaiPath(arg0: str) None

redefine the rai (or rai-robotModels) path