The Robotics Glossary: 18 Robotics Terms You Need to Know

Lacey Trebaol
Oct 7, 2019 4:06:32 PM

In every domain, there is a set of terms that, once known, makes communication tangibly easier. Robotics is no exception to this rule. In fact, I’d argue that robotics has so many important and niche terms that it takes this to the next level. Whether you work in the domain or in another area that sometimes touches on robotics, you’ll eventually come across a few of these terms (and maybe wonder what they mean). So without further ado, here is Part 1 of the Energid Robotics Glossary.

Adaptive Motion Control: This term is used when motions are sequenced to perform a task that is defined relative to some object or feature. That object or feature can be repositioned and updated by a sensor in real time, and the robotic system adapts its motion to accomplish the task. Actin is well suited to this because it calculates the robot's joint velocities in real time, allowing the robot trajectory to change while the task is being performed.
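To make that concrete, here is a minimal Python sketch (not Actin code) of a control loop whose goal is re-read from a sensor every cycle, so the commanded motion adapts if the object moves mid-task. The read_object_position() function is a hypothetical stand-in for the sensor, and the joint-level math is omitted.

```python
import numpy as np

def read_object_position(t):
    # Hypothetical sensor: the tracked object drifts along Y while the task runs.
    return np.array([0.60, 0.05 * np.sin(0.5 * t), 0.20])

def adaptive_motion_step(tool_pos, target_pos, dt, max_speed=0.25):
    # Recompute the commanded velocity toward the *latest* sensed target each cycle,
    # so the trajectory changes as the task is being performed.
    error = target_pos - tool_pos
    dist = np.linalg.norm(error)
    if dist < 1e-4:
        return tool_pos
    v = min(max_speed, dist / dt) * error / dist
    return tool_pos + v * dt

tool = np.array([0.30, -0.10, 0.40])
t, dt = 0.0, 0.004  # 250 Hz control loop
for _ in range(2000):
    target = read_object_position(t)   # sensor update every cycle
    tool = adaptive_motion_step(tool, target, dt)
    t += dt
print("final tool position:", tool)
```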

 

CAD to Path: CAD to path is a method of converting a robot path created in a CAD environment into a robot program. It is ideal for achieving high accuracy in process applications such as deburring and dispensing, where manually teaching robots can be time-consuming. Though CAD tools are convenient for programming process requirements, translating them into a robot program is generally not easy, and there is no common format for the conversion. Actin's CAD to path feature, combined with its advanced motion planning capabilities, has enabled robots to serve as a practical alternative in unconventional applications.

 

Constraint: A robotic constraint is a restriction placed on a robotic system that narrows its achievable motion possibilities. Constraints include both the mechanical constraints of the system, such as a joint that is at its limit, and the constraints applied by the user, such as the desired position of the tool. While the term “constraint” often has a negative connotation, robotic constraints are necessary and allow the user to determine the motion of the system. For instance, when following a toolpath, respecting the surface normal vector and any desired execution offsets relative to that vector is usually critical to the robot task, so it’s necessary to constrain the robot link that is following the toolpath to match those normal vectors.

For more information, check out this post on the Energid Blog.
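As a rough illustration of the toolpath example above, the sketch below (plain NumPy, not Actin's API) builds a tool goal frame whose Z axis is locked to the surface normal while the rotation about Z is left free for the solver; the function name and the x_hint argument are purely illustrative.

```python
import numpy as np

def constrained_tool_frame(point, surface_normal, x_hint=np.array([1.0, 0.0, 0.0])):
    # Orientation constraint for toolpath following: the tool Z axis must oppose
    # the surface normal. Rotation about Z (tool spin) remains unconstrained.
    # Note: x_hint must not be parallel to the normal.
    z = -surface_normal / np.linalg.norm(surface_normal)
    x = x_hint - np.dot(x_hint, z) * z      # project the hint into the plane normal to z
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    frame = np.eye(4)
    frame[:3, 0], frame[:3, 1], frame[:3, 2], frame[:3, 3] = x, y, z, point
    return frame

print(constrained_tool_frame(np.array([0.5, 0.0, 0.1]), np.array([0.0, 0.0, 1.0])))
```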

Cobot: The term cobot is an abbreviation of collaborative robot. Cobots are designed to safely collaborate with, or work alongside, humans. With the addition of sensors, a robot that was not originally designed to work with humans can also be adapted for collaborative operation in manufacturing environments. This is in contrast to traditional industrial robots, which are designed to operate autonomously or with limited guidance, and which accounted for most industrial robots until the 2010s.

The video below features a UR-5 (one of our favorite cobots!).

 

End Effector (also known as end-of-arm tooling, EE, or EOAT): End effector is a generic term for any device that can be installed at a robot wrist. It is the subsystem of a robotic system that links the manipulator to the part being handled or worked on, and gives the robot the ability to pick up, handle, or otherwise interact with parts in the environment. Examples include grippers and welding torches.

Kinematic Redundancy: Robots with more than six axes are considered kinematically redundant because they can achieve a particular end effector pose from multiple joint states. The video below shows kinematic redundancy with a 7-axis KUKA iiwa robot.

 

Generally, such a manipulator can place a hand or tool at a desired position and orientation (or pose) within its workspace in an unlimited number of ways. Similarly, a bifurcating (i.e., branching) two-handed robot with more than 12 degrees of freedom can place its two hands at specified poses within its workspace in an unlimited number of ways. Energid’s Actin toolkit focuses on just these types of kinematic problems.

Click here to learn more about kinematic redundancy.
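One way to see redundancy in action is to push a redundant arm along the null space of its Jacobian: the joints keep moving, but the tool point stays put. The snippet below does this for a toy planar three-link arm in plain NumPy; it is a conceptual sketch, not how Actin is implemented.

```python
import numpy as np

L = np.array([0.4, 0.3, 0.2])  # planar 3-link arm: 3 joints, 2-D position task -> 1 redundant DOF

def tip(q):
    a = np.cumsum(q)
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jacobian(q):
    a = np.cumsum(q)
    J = np.zeros((2, 3))
    for j in range(3):
        J[0, j] = -np.sum(L[j:] * np.sin(a[j:]))
        J[1, j] = np.sum(L[j:] * np.cos(a[j:]))
    return J

q = np.array([0.3, 0.5, -0.4])
p0 = tip(q)
for _ in range(200):
    _, _, vt = np.linalg.svd(jacobian(q))
    d = vt[-1]                      # null-space direction: joint motion with no tip motion
    if d[0] < 0:
        d = -d                      # fix the arbitrary sign returned by the SVD
    q += 0.002 * d
print("tip drift after self-motion:", np.linalg.norm(tip(q) - p0))  # ~0, numerical error only
```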

Kinematic Control (in Actin): Actin is primarily a toolkit for the kinematic control of robotic manipulators. Using velocity control as the core technique, it calculates the joint rates or positions that give desired hand velocities or positions. All of this is done automatically, based only on the manipulator model description. This is the strength of the Actin toolkit—the ability to control almost any robotic manipulator using just its physical description. Manipulators with any number of links, any number of bifurcations (branches), nonstandard joint types, and nonstandard end-effector types are supported. To learn more, download this whitepaper.
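For the curious, the core of velocity control can be sketched in a few lines: solve J q̇ = v for the joint rates that produce a desired tool velocity. The example below uses a damped least-squares solve on a made-up Jacobian; it illustrates the general technique, not Actin's internal algorithm.

```python
import numpy as np

def velocity_control_step(J, desired_tool_velocity, damping=1e-3):
    # Resolved-rate (velocity) control: solve J * qdot = v for the joint rates,
    # with damping so the step stays well-behaved near singularities.
    JT = J.T
    return JT @ np.linalg.solve(J @ JT + damping * np.eye(J.shape[0]), desired_tool_velocity)

# Toy 2x3 Jacobian (a redundant planar arm) and a desired planar tool velocity.
J = np.array([[-0.5, -0.3, -0.1],
              [ 0.6,  0.4,  0.2]])
v = np.array([0.05, 0.00])
qdot = velocity_control_step(J, v)
print("joint rates:", qdot, "achieved tool velocity:", J @ qdot)
```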

Natural Tasking: Natural Tasking is a comprehensive approach to specifying robotic tasks easily, without over- or under-constraint. Natural Tasking leverages Actin’s flexible constraint-based robot kinematics, pre-defined low-level motion scripts, and path planning. Motion Constraints (End Effector Constraints) define how a link on the robot model is allowed to move. A link can be constrained in up to 6 DOF in space (using a Frame End Effector), but sometimes that is unnatural or over-constrained, so depending on the task we can use other constraints. Using a robot arm to move a pen, we can use a 5-axis constraint (Free Spin in Z); grasping a sphere, we can use a 3 DOF constraint (Point End Effector). These constraints can be relative to the system frame or a link frame.

 

Click here for a more detailed description of natural tasking (and lots of examples!).
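One illustrative way to think about constraint dimensionality is shown below: the fewer task degrees of freedom you constrain, the more freedom is left over for optimization. The names and numbers mirror the pen and sphere examples above; nothing here is Actin API.

```python
# Hypothetical constraint descriptors (names are illustrative, not Actin's API).
CONSTRAINTS = {
    "frame (full pose)":     6,  # X, Y, Z, roll, pitch, yaw -- e.g. inserting a keyed connector
    "free spin in Z (axis)": 5,  # moving a pen or drill: spin about the tool axis is irrelevant
    "point (position only)": 3,  # grasping a sphere: orientation is irrelevant
}

def leftover_dof(robot_joints, constrained_dims):
    # Every unconstrained dimension is freedom the solver can spend on optimization:
    # joint-limit avoidance, collision avoidance, smoothness, and so on.
    return robot_joints - constrained_dims

for name, dims in CONSTRAINTS.items():
    print(f"{name:24s} constrained DOF = {dims}, leftover DOF on a 7-axis arm = {leftover_dof(7, dims)}")
```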

Offline Programming: A file of fixed control code documenting time-sequenced joint angles, typically compiled into a robot’s native language and uploaded to the robot-specific controller for playback.

Offline Tasking: A file of sequenced End Effector (EE) goal locations that are achieved either by a Rapidly-exploring Random Tree (RRT) algorithm or by streamed joint angles that are optimized at every timestep based on known kinematics and sensed environmental changes.

Online Programming: Constant streaming of joint angles from a control system embedded on the controller (or a coprocessor), sent through the controller or directly to individual servos, to adaptively achieve a series of sequenced EE goal locations regardless of how the goals move relative to the robot base or system origin.
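A toy comparison may help separate the last two ideas: offline programming bakes the joint angles against a fixed goal and plays them back, while online programming recomputes them every timestep against the goal as currently sensed. Everything in the sketch below (the ik() stand-in, the drifting goal) is hypothetical.

```python
import numpy as np

def ik(goal):
    # Stand-in "inverse kinematics" for a toy 2-joint system, purely illustrative.
    return np.array([goal[0], goal[1] - goal[0]])

# Offline programming: joint angles are computed once against a fixed goal and
# played back verbatim; the robot cannot react if the goal later moves.
fixed_goal = np.array([0.4, 0.9])
offline_trajectory = [ik(fixed_goal * s) for s in np.linspace(0.0, 1.0, 5)]

# Online programming: joint angles are recomputed every timestep from the goal as
# sensed *now*, so the motion adapts if the goal drifts during execution.
def sensed_goal(t):
    return np.array([0.4, 0.9 + 0.05 * t])  # hypothetical goal drifting while the robot moves

online_trajectory = [ik(sensed_goal(t) * t) for t in np.linspace(0.0, 1.0, 5)]

print("offline final joints:", np.round(offline_trajectory[-1], 3))
print("online final joints: ", np.round(online_trajectory[-1], 3))
```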

Optimization: In robotics, optimization is how an underconstrained (kinematically redundant) robotic system determines how to execute its commanded motion. It does this by finding the minimum value of a mathematical combination of the joint motions and other factors selected for inclusion in the optimization. These other factors might include the distance to joint actuator limits, the distance to collision, or the energy required to actuate the system.

For more information, check out this post on the Energid Blog.
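For readers who like to see the math, a common textbook form of this optimization resolves the redundancy by projecting the gradient of a secondary objective into the Jacobian's null space, so the primary task is unaffected. The sketch below shows that pattern with a made-up Jacobian and a keep-joints-centered objective; it is not Actin's specific formulation.

```python
import numpy as np

def optimized_joint_rates(J, v, q, q_mid, k=1.0):
    # Primary task: J * qdot = v, satisfied via the pseudoinverse.
    # Leftover freedom: descend a secondary cost (distance from mid-range joints)
    # projected into the null space of J, so the task velocity is unchanged.
    J_pinv = np.linalg.pinv(J)
    null_proj = np.eye(J.shape[1]) - J_pinv @ J
    secondary = -k * (q - q_mid)
    return J_pinv @ v + null_proj @ secondary

J = np.array([[-0.5, -0.3, -0.1],
              [ 0.6,  0.4,  0.2]])
v = np.array([0.05, 0.0])
q = np.array([1.2, -0.4, 2.0])       # current joint angles, far from mid-range
qdot = optimized_joint_rates(J, v, q, q_mid=np.zeros(3))
print("joint rates:", qdot, "task velocity achieved:", J @ qdot)
```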

Path Planning: When programming a robot to perform a task, it is very often the case that the robot motions involved must not cause the robot to collide with itself, its environment, its tooling or payload, or other robots. Robot programmers can either manually teach the trajectories/waypoints that move the robot and its end effector around obstacles to its goal, or they can use a path planning algorithm. Robot path planning finds a valid sequence of motions that moves a robotic manipulator’s end effector from where it is at the start of its motion to where it needs to be at the end of its motion.

To learn more, check out this post where we discuss two categories of robot arm path planning that Actin supports (global path planning and local path planning), and why a roboticist might choose one over another.
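To give a flavor of what a sampling-based planner does, here is a bare-bones RRT in a 2-D space with a single circular obstacle. A real robot planner works in joint space with full collision geometry, so treat this strictly as a sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
obstacle_center, obstacle_radius = np.array([0.5, 0.5]), 0.2
start, goal, step = np.array([0.1, 0.1]), np.array([0.9, 0.9]), 0.05

def collision_free(p):
    return np.linalg.norm(p - obstacle_center) > obstacle_radius

nodes, parent = [start], {0: None}
for _ in range(5000):
    sample = goal if rng.random() < 0.1 else rng.random(2)            # goal bias
    nearest = int(np.argmin([np.linalg.norm(sample - n) for n in nodes]))
    direction = sample - nodes[nearest]
    new = nodes[nearest] + step * direction / (np.linalg.norm(direction) + 1e-12)
    if collision_free(new):
        parent[len(nodes)] = nearest
        nodes.append(new)
        if np.linalg.norm(new - goal) < step:                         # reached the goal region
            break

path, i = [], len(nodes) - 1                                          # walk back up the tree
while i is not None:
    path.append(nodes[i])
    i = parent[i]
print("collision-free waypoints found:", len(path))
```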

Remote TCP: A Remote Tool Center Point (Remote TCP or RTCP) is a tool center point fixed in space relative to the robot base. A Remote TCP is used when the application requires the robot to grasp a part and move it at a constant speed relative to a tool fixed in the work cell. Common applications include dispensing, deburring, polishing, grinding, and sewing.
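The frame bookkeeping behind a remote TCP is short enough to write down: because the tool is fixed in the cell and the robot holds the part, the flange pose comes from chaining the remote TCP frame, the desired part-relative-to-tool pose, and the inverse of the grasp transform. The helper names in this NumPy sketch are illustrative.

```python
import numpy as np

def flange_pose_for_remote_tcp(T_rtcp_in_world, T_part_in_rtcp, T_part_in_flange):
    # World -> remote TCP -> part, then back out through the grasp:
    #   T_flange_in_world = T_rtcp_in_world @ T_part_in_rtcp @ inv(T_part_in_flange)
    return T_rtcp_in_world @ T_part_in_rtcp @ np.linalg.inv(T_part_in_flange)

def translation(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

T_rtcp = translation(1.0, 0.0, 0.8)    # e.g. a dispensing nozzle fixed in the work cell
T_grasp = translation(0.0, 0.0, 0.15)  # where the part sits in the gripper (part in flange frame)

# Sweep the part 10 cm across the fixed tool; each sample is a flange pose to command.
for s in np.linspace(0.0, 0.1, 3):
    print(np.round(flange_pose_for_remote_tcp(T_rtcp, translation(s, 0.0, 0.0), T_grasp)[:3, 3], 3))
```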

 

Tasking: Tasking is a way to sequence a series of robot motions to perform a task. Generally, tasking involves sequencing motions (joint and end effector) to move to and from home positions and part positions. End effectors are usually fully constrained in 6 degrees of freedom (X, Y, Z, roll, pitch, yaw), typically in the system frame, tool frame, or “part” frame, and the end effector TCP or tool offsets are typically configured based on tooling geometry only.

Tasking Framework (in Actin): Tasking a robot is difficult, and the difficulty increases when robots are tasked in an unnatural, over-constrained way. Natural tasking is a comprehensive approach to specifying robotic tasks easily, without over- or under-constraint. It allows you to take advantage of the natural symmetry of your tooling and parts, which in turn simplifies programming, and it allows you to define tasks at a higher level, relative to objects or features on objects (independent of object pose). Tool offsets (TCP) can be generated from existing frames, including tooling and payload geometry and state.

Natural Tasking leverages Actin’s flexible constraint-based kinematics. Motion Constraints (End Effector Constraints) define how we wish a link to move. We can constrain up to 6 DOF on a link in space (using a Frame End Effector); using an arm to move a pen, we can use a 5-axis constraint (Free Spin in Z); grasping a sphere, we can use a 3 DOF constraint (Point End Effector). These constraints can be relative to the system frame or a link frame.

Tool path: A series of 6 DOF goal frames, relative to a rigid body, robot base, or system origin, that are interpolated in the desired direction to produce hardware motion. Tool paths are typically represented in a standard Numerical Control (NC) format such as G-code. In Actin, a tool path and any related angular or distance execution offsets are examples of system constraints used together with the remaining (even redundant) kinematics and additional system optimizations to achieve ideal hardware motion.
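To show what interpolating between goal frames looks like in practice, here is a small sketch that blends two 6 DOF frames, linearly in translation and spherically (slerp) in rotation, using SciPy's Rotation and Slerp utilities (assuming SciPy is available). The frames themselves are made up.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

# Two 6 DOF goal frames along a tool path: positions plus orientations.
p0, p1 = np.array([0.40, 0.00, 0.20]), np.array([0.40, 0.10, 0.20])
rotations = Rotation.from_euler("xyz", [[180, 0, 0], [180, 0, 15]], degrees=True)
slerp = Slerp([0.0, 1.0], rotations)

# Interpolate intermediate goal frames at the controller's path resolution.
for s in np.linspace(0.0, 1.0, 5):
    position = (1 - s) * p0 + s * p1          # linear in translation
    orientation = slerp(s)                    # spherical-linear in rotation
    print(np.round(position, 3), np.round(orientation.as_euler("xyz", degrees=True), 1))
```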

Visual Servoing: Visual servoing is a method of robot control that uses visually sensed data and a system model as inputs to pose estimation and kinematic control algorithms. Those algorithms determine coordinated joint angles, which are streamed to the individual servos at each time step, producing robot motion that attempts to achieve the user-defined goal, which may itself be moving.
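As a concrete, if simplified, example, the classic image-based formulation stacks one interaction matrix per tracked image feature and solves for the camera velocity that drives the feature error to zero. The sketch below implements that standard textbook form, not Energid's particular pipeline.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    # Interaction matrix (image Jacobian) of one point feature at normalized image
    # coordinates (x, y) and depth Z, relating camera velocity to feature velocity.
    return np.array([
        [-1 / Z,      0, x / Z,     x * y, -(1 + x**2),  y],
        [     0, -1 / Z, y / Z,  1 + y**2,     -x * y,  -x],
    ])

def visual_servo_step(features, desired, depths, gain=0.5):
    # Stack the per-feature interaction matrices and solve for the 6-D camera
    # twist [vx vy vz wx wy wz] that drives the feature error toward zero.
    L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

features = [(0.10, 0.05), (-0.08, 0.12), (0.02, -0.11)]   # currently observed features
desired  = [(0.00, 0.00), (-0.10, 0.10), (0.05, -0.10)]   # where we want them in the image
print(visual_servo_step(features, desired, depths=[1.0, 1.1, 0.9]))
```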

Are there terms you’d like to see added to the glossary? Do you have any questions about what’s included in this first set? Leave us a comment below, as we'll be creating a larger glossary on our website and want to be sure it’s as comprehensive and helpful as possible.

Free Whitepaper: Leveraging High-Fidelity Simulation to Evaluate Autonomy Algorithm Safety


The last barrier is to prove to decision-makers (and the general public) that these autonomous systems are safe.

This paper describes a rigorous safety testing environment for large autonomous vehicles. Our approach to this borrows elements from game theory, where multiple competing players each attempt to maximize their payout. With this construct, we can model an environment that has an agent that seeks poor performance in an effort to find the rare corner cases that can lead to automation failure.

Get the Whitepaper
