A Path for Realistic Human-Robot Collaboration—Part 1

By Alberto Moel, Vice President Strategy and Partnerships, and Scott Denenberg, co-founder and Chief Architect at Veo Robotics

We’ve all seen those “futuristic” CGI videos1 of (usually humanoid) manufacturing robots fluidly interacting with people in a joyous collaborative dance. We look forward to the day when this dream is realized, but, as things stand today, we have a long way to go. In this new series of blog posts, we explore what human-robot collaboration really looks like, and how it could be improved within the constraints of the real world’s technology capabilities and manufacturing economics.

In reality, manufacturing robots, for all their positive attributes of speed, brawn, reliability, and precision, have no “native” sensing capability. If you want your robot to “feel” or “see” anything, you must outfit it or its environment with external sensors that generate signals that are fed into the robot control hardware.

For example, if you want your robot to pick up a part, you must program its trajectory and actions to specifically pick up that part. And if you want it to “know” that it has picked up the part (or not), you need to add sensing capabilities to the robotic system so that some (usually electrical) signal is generated to “tell” the robot that the part has been picked up. Every possible external sensing and activity must be explicitly programmed.

The stakes get tremendously higher when the “part” the robot must sense is a human in its vicinity. Strict safety standards2 mean that the reliability and performance level of any system that is meant to report human presence (or absence) to a robot has to be extraordinary.3

Such a sensing system is not supposed to fail, ever, but if it did, the failure modes would have to be known in advance, and it would have to fail in a way that the failure itself would not cause injury. As a result, designing robotic applications where humans are involved usually requires all kinds of systems and fixturing to make sure humans remain safe in the presence of the robot.

So most of the time, manufacturing engineers choose to avoid all of these complications and simply keep the humans and robots away from each other using cages or other guarding methods (such as light curtains), eliminating the opportunity for any form of human-machine collaboration. This greatly reduces the potential applications of robotics—manufacturing steps can either be completed by a human or via automation, but not both. These precautions, while necessary, lead to inefficient practices and, in some cases where humans are forced to do work better suited for robots, can result in repetitive stress injuries or strains from heavy lifting. Such is our current reality.

But the times they are a changin’…

Current manufacturing trends toward mass customization and shorter product cycles mean that manufacturers cannot really “get away” with this separation of human and machine anymore. Mass customization means more and more fixturing and setup in the robotic workcell in order to deal with all the product variety, and shorter product cycles mean more frequent workcell changeovers and redesigns. These two trends make it very difficult to economically amortize the costs of fully automated workcells.

As these end-market trends continue, the industry has gingerly moved toward allowing robots and humans to safely be in the same space while the robot is operating. From the industry side, robot manufacturers and automation providers have, over the last few years, introduced simple but reliable safety sensors and interfaces that allow some level of human-robot collaboration. These systems can safely stop or slow down the robot when signaled, for example, by a human triggering an area scanner. These systems also allow for dynamic constraints on the speed or reach of the robot in certain software-defined physical zones.4
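
As a sketch of how these software-defined zones behave, the following Python snippet picks the tightest speed cap triggered by a detected human. It is purely illustrative; the zone names, coordinates, and speed caps are our own assumptions, not any vendor’s API:

```python
from dataclasses import dataclass

# Illustrative sketch of zone-based speed limiting, not any vendor's API.
# Each software-defined zone maps a rectangular region of the workcell to a
# speed cap (mm/s) that applies while a human is detected inside it.

@dataclass
class SafetyZone:
    name: str
    x_range: tuple   # (min, max) in mm, workcell coordinates
    y_range: tuple
    speed_cap_mm_s: float  # 0.0 means protective stop

ZONES = [
    SafetyZone("stop_zone", (0, 1000), (0, 1000), 0.0),
    SafetyZone("slow_zone", (0, 2000), (0, 2000), 250.0),
]

FULL_SPEED_MM_S = 2000.0

def allowed_speed(human_xy):
    """Return the tightest speed cap triggered by a detected human."""
    if human_xy is None:  # no human detected anywhere
        return FULL_SPEED_MM_S
    x, y = human_xy
    caps = [z.speed_cap_mm_s for z in ZONES
            if z.x_range[0] <= x <= z.x_range[1]
            and z.y_range[0] <= y <= z.y_range[1]]
    return min(caps, default=FULL_SPEED_MM_S)

print(allowed_speed(None))          # no human: full speed
print(allowed_speed((1500, 1500)))  # slow zone only: 250.0
print(allowed_speed((500, 500)))    # inside stop zone: 0.0 (protective stop)
```

In a real installation these checks run in safety-rated hardware with certified sensors; the sketch only shows the shape of the decision logic.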

Industry standards bodies have also tackled these issues. The core standards governing safe human-robot collaboration are ISO 10218 and ISO/TS 15066, which define collaborative applications using Power and Force Limiting (PFL) and Speed and Separation Monitoring (SSM) approaches.

Into the weeds a bit: PFL and SSM

Power and force limited robots, designed for installation in production settings, can “collaborate” with humans because they stop on human contact. That’s right, a PFL robot is necessarily equipped with contact sensing (for example, joint torque sensors, a load cell, or a capacitive skin) that allows it to “know” when it touches something it’s not supposed to and come to a stop. The PFL requirements say not only that the robot must stop on contact, but also that any contact must stay below force and pressure limits so that it does not hurt whatever it touched. Humans are soft and squishy, bruise easily, and don’t like to be hit by robots at any speed. Hence, to limit potential injury, PFL robots have to move very slowly and can only work with limited payloads.
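
To make the contact rule concrete, here is a minimal, purely illustrative sketch of PFL contact monitoring in Python. The force figures echo the spirit of the body-region limits in ISO/TS 15066 Annex A, but treat them as placeholder values, not the standard’s actual tables:

```python
# Illustrative sketch of a PFL contact-monitoring decision.
# The force limits are example values in the spirit of ISO/TS 15066
# Annex A body-region limits; consult the standard for the real figures.

QUASI_STATIC_FORCE_LIMIT_N = {
    "hand": 140.0,
    "chest": 140.0,
    "skull": 130.0,
}

def check_contact(measured_force_n, body_region):
    """Return 'stop' if measured contact force exceeds the region's limit."""
    limit = QUASI_STATIC_FORCE_LIMIT_N.get(body_region)
    if limit is None:
        return "stop"  # unknown body region: fail safe
    return "stop" if measured_force_n > limit else "continue"

print(check_contact(90.0, "hand"))   # within limit: continue
print(check_contact(150.0, "hand"))  # over limit: protective stop
```

Note the fail-safe default for an unrecognized body region: in safety logic, uncertainty must resolve toward the safe state.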

Robot makers have picked up on the fact that PFL robots, however, ahem, limited, are likely to be accepted by the market. So far, they’ve been right. Introduced as “collaborative robots,” PFL robots have gained popularity and shown that collaborative applications can raise productivity, provide faster fault recovery, and increase unit production rates.5

But even though PFL robots perform well for certain applications, they are too weak and slow for most durable goods manufacturing, limiting their market penetration. In addition, they are no longer safe and can’t be used collaboratively if equipped with a dangerous end effector or payload, or in an application where other sources of risk are present (for example, when tending a dangerous machine such as a press or an injection molder).6

The alternative to PFL collaborative robotics is SSM, which is described in ISO/TS 15066 section 5.5.4:

In this method of operation, the robot system and operator may move concurrently in the collaborative workspace. Risk reduction is achieved by maintaining at least the Protective Separation Distance [PSD] between operator and robot at all times. During robot motion, the robot system never gets closer to the operator than the Protective Separation Distance. When the separation distance decreases to a value below the Protective Separation Distance, the robot system stops. When the operator moves away from the robot system, the robot system can resume motion automatically according to the requirements of this clause while maintaining at least the Protective Separation Distance. When the robot system reduces its speed, the Protective Separation Distance decreases correspondingly.

In plain English, with SSM, the robot never touches the human operator in its work envelope while moving; the safety mode shifts from “don’t hurt me if you hit me” to “never touch me while you’re moving.” Collaborative applications using SSM face different and less onerous limitations on robot speed, payload, and end effector design, requiring instead that the robot and workcell default to a safe state whenever the Protective Separation Distance is violated.
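
The PSD itself follows the formula in ISO/TS 15066, Sp = Sh + Sr + Ss + C + Zd + Zr: operator travel during the system’s reaction and stopping time, plus robot travel during the reaction time, plus the robot’s stopping distance, plus an intrusion distance and position-measurement uncertainties. A minimal sketch follows; the parameter values are illustrative, and a real system must use measured, safety-rated figures:

```python
# Minimal sketch of the protective separation distance (PSD) formula from
# ISO/TS 15066: Sp = Sh + Sr + Ss + C + Zd + Zr. Parameter values below are
# illustrative only.

V_HUMAN = 1600.0   # assumed operator approach speed, mm/s (per ISO 13855)

def protective_separation_distance(v_robot_mm_s, t_reaction_s, t_stop_s,
                                   stop_distance_mm, intrusion_mm=0.0,
                                   z_operator_mm=0.0, z_robot_mm=0.0):
    s_h = V_HUMAN * (t_reaction_s + t_stop_s)  # operator travel while system reacts and stops
    s_r = v_robot_mm_s * t_reaction_s          # robot travel during reaction time
    s_s = stop_distance_mm                     # robot stopping distance
    return s_h + s_r + s_s + intrusion_mm + z_operator_mm + z_robot_mm

def ssm_decision(separation_mm, psd_mm):
    """Stop whenever the measured separation drops below the PSD."""
    return "stop" if separation_mm < psd_mm else "run"

psd = protective_separation_distance(
    v_robot_mm_s=1000.0, t_reaction_s=0.1, t_stop_s=0.5, stop_distance_mm=300.0)
print(psd)                        # 1600*0.6 + 1000*0.1 + 300 = 1360.0
print(ssm_decision(2000.0, psd))  # "run"
print(ssm_decision(1000.0, psd))  # "stop"
```

The formula also explains the last sentence of the quoted clause: a slower robot has a smaller Sr and Ss, so the required PSD shrinks as the robot slows down.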

A machine vision-based implementation of SSM (which we are developing at Veo) provides a way to overcome PFL limitations, making large industrial robots aware of humans and opening tremendous new opportunities for human-robot collaboration.

We’re almost there…

SSM is still in its early stages, and many of the elements required for fluid human-robot collaboration using SSM remain works in progress. For example, current sensing technologies rely mostly on two-dimensional sensors (e.g., laser scanners), which do not provide the richness of data required to implement dynamic SSM. At Veo Robotics, we’re developing a safety-rated, dedicated hardware and software system for 3D sensing to implement dynamic SSM. Other areas where practical implementations of SSM are lacking include robot stopping distance calculation and characterization, and robot control latencies. We will take up these important elements in subsequent blog posts.


1 Likely in a demo of some Ph.D. student’s thesis. We were both Ph.D. students once, and when in that position, your objective is to make something work one time so you can get your ticket punched. Going from “cool demo that works once” to installing a fully operational and production-ready robot in a manufacturing line is a long road. It took 10 years for the first proof of concept in Alberto’s Ph.D. thesis to enter commercial use—and it didn’t even involve human safety.

2 And we mean strict, in both their description and their application. Since 1984, over 300,000 robots have been in operation in the US, and OSHA reports just 14 deaths due to robots (and 42 industrial accidents total) over the course of those 35 years, and one fatality since 2016. One would expect the actual number of accidents to be higher, as some incidents can go unreported, but compare that to the 37,000+ automobile-related deaths (and many multiples of that in injuries) NHTSA reported just in 2016. There are roughly 250 million vehicles in operation in the US, and, if you do the math, you’re over 100 times more likely to be killed by a car than by a robot.

3 And usually extraordinarily expensive. For example, a safety-rated 2D LIDAR sensor can cost two or three times as much as the exact same piece of hardware without a safety rating.

4 These include Dual Check Safety (DCS) from FANUC, SafeMove from ABB, Yaskawa’s Functional Safety Unit (FSU), and KUKA.SafeOperation.

5 Collaborative robots have been the fastest growing segment of the robot market, with IFR estimating 23% YoY growth between 2017 and 2018. However, the number of units installed is still very low—out of more than 422,000 industrial robots installed in 2018, fewer than 14,000 were collaborative robots. These figures are also well below forecasts by market participants.

6 This is a crucial point that is made very clear in the standards. From ISO 10218-1 section 5.10.4:

The robot is simply a component in a final collaborative robot system and is not in itself sufficient for a safe collaborative operation. The collaborative operation applications are dynamic and shall be determined by the risk assessment performed during the application system design.