Communication and collaboration between humans and robots is one of the main principles of the fourth industrial revolution (Industry 4.0). In the coming years, robots and humans will become co-workers, sharing the same working space and helping each other. A robot intended for collaboration with humans has to be equipped with safety components that differ from the standard ones (cages, laser scanners, etc.).

In this project, a safety system for human-robot collaboration applications has been developed. The system is able to:

  • recognize and track the robot;
  • recognize and track the human operator;
  • measure the distance between them;
  • discriminate between safe and unsafe situations.

The safety system is based on two Microsoft Kinect v2 Time-Of-Flight (TOF) cameras. Each TOF camera measures the 3D position of each point in the scene by evaluating the time of flight of a light signal emitted by the camera and reflected by each point. The cameras are placed on the safety cage of a robotic cell (figure 1) so that their combined field of view covers the entire robotic working space. The 3D point clouds acquired by the TOF cameras are aligned with respect to a common reference system using a suitable calibration procedure [1].

Figure 1 – Positions of the TOF cameras on the robotic cell.
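The alignment step can be sketched as a rigid transformation mapping each camera's point cloud into the common reference system. The rotation matrix and translation vector below are placeholder values standing in for the output of the calibration procedure [1]:

```python
import numpy as np

# Placeholder extrinsics for the second camera, as a calibration
# procedure such as [1] would provide them.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])   # 90-degree rotation about the z axis
t = np.array([2.5, 0.0, 0.0])      # translation in metres (placeholder)

def to_common_frame(points, R, t):
    """Map an (N, 3) point cloud into the common reference system."""
    return points @ R.T + t

# Two sample points as seen by the second camera, in metres.
cloud_cam2 = np.array([[1.0, 0.0, 1.2],
                       [0.5, 0.5, 1.0]])
aligned = to_common_frame(cloud_cam2, R, t)
```

Once both clouds are expressed in this common frame, distances between points seen by different cameras become directly comparable.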

Robot and human detection is performed by analyzing the RGB-D images (figure 2) acquired by the cameras. These images contain both the RGB information and the depth information of each point in the scene.

Figure 2 – RGB-D images captured by the two TOF cameras.

Robot recognition and tracking (figure 3) is based on the KLT (Kanade-Lucas-Tomasi) algorithm, which uses the RGB data to detect the moving elements in a sequence of images [2]. The algorithm analyzes the RGB-D images and finds feature points such as edges and corners (see the green crosses in figure 3). The 3D position of the robot (represented by the red triangle in figure 3) is then computed by averaging the 3D positions of the feature points.

Figure 3 – Robot recognition and tracking
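The averaging step can be illustrated with a short sketch. The KLT feature tracking itself (available in off-the-shelf libraries) is out of scope here; the function below only back-projects already-tracked pixel features through camera intrinsics and averages them. The intrinsic parameters `fx`, `fy`, `cx`, `cy` are placeholder values, not those of an actual Kinect v2 calibration:

```python
import numpy as np

def robot_position(features_uv, depth, fx=365.0, fy=365.0, cx=256.0, cy=212.0):
    """Average the 3D back-projections of tracked feature points.

    features_uv : (N, 2) pixel coordinates of the tracked KLT features
    depth       : (H, W) depth map in metres from the TOF camera
    fx, fy, cx, cy : placeholder pinhole intrinsics (assumed values)
    """
    pts3d = []
    for u, v in features_uv:
        z = depth[int(v), int(u)]
        if z <= 0:                      # skip invalid depth readings
            continue
        x = (u - cx) * z / fx           # pinhole back-projection
        y = (v - cy) * z / fy
        pts3d.append((x, y, z))
    return np.mean(pts3d, axis=0)       # the red triangle of figure 3

# Synthetic example: a flat depth map and two tracked features.
depth = np.full((424, 512), 2.0)        # metres
features = np.array([[256.0, 212.0], [260.0, 212.0]])
pos = robot_position(features, depth)
```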

Human recognition and tracking (figure 4) is based on the HOG (Histogram of Oriented Gradients) algorithm [3]. The algorithm computes the 3D human position by analyzing the gradient orientations of portions of the RGB-D images and feeding them to a trained support vector machine (SVM). Once detected, the human operator is framed in a yellow box and his 3D center of mass is computed (see the red square in figure 4).

Figure 4 – Human recognition and tracking
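Given the bounding box returned by a HOG-based detector, the centre-of-mass computation can be sketched as averaging the back-projected depth pixels inside the box. As in the previous sketch, the intrinsics are placeholder values and the box coordinates are synthetic:

```python
import numpy as np

def center_of_mass(box, depth, fx=365.0, fy=365.0, cx=256.0, cy=212.0):
    """3D centre of mass of the depth pixels inside a detection box.

    box   : (u, v, w, h) bounding box from the HOG detector (pixels)
    depth : (H, W) depth map in metres; intrinsics are placeholders
    """
    u0, v0, w, h = box
    patch = depth[v0:v0 + h, u0:u0 + w]
    vs, us = np.nonzero(patch > 0)          # keep valid depth readings only
    z = patch[vs, us]
    x = (us + u0 - cx) * z / fx             # pinhole back-projection
    y = (vs + v0 - cy) * z / fy
    return np.array([x.mean(), y.mean(), z.mean()])

# Synthetic example: operator standing 3 m from the camera.
depth = np.full((424, 512), 3.0)
com = center_of_mass((246, 150, 20, 120), depth)
```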

Three different safety strategies have been developed. The first strategy is based on the definition of suitable comfort zones around both the human operator and the robotic device. The second strategy implements virtual barriers separating the robot from the operator. The third strategy is based on the combined use of comfort zones and virtual barriers.

In the first strategy, a sphere is defined around the robot and a cylinder around the human, and the distance between them is computed. Three different situations may occur (figure 5):

  1. Safe situation (figure 5.a): the distance is greater than zero and the sphere and the cylinder are far from each other;
  2. Warning situation (figure 5.b): the distance approaches zero and the sphere and the cylinder are very close;
  3. Unsafe situation (figure 5.c): the distance is negative and the sphere and the cylinder collide.

Figure 5 – Monitored situations in the comfort zones strategy. Safe situation (a), warning situation (b), and unsafe situation (c).
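The three situations can be captured by a simple signed-distance check. This sketch makes the simplifying assumption that the sphere lies within the vertical extent of the cylinder, so only the horizontal separation matters; the warning threshold is a placeholder value, not a parameter of the real system:

```python
import numpy as np

def classify(robot_c, human_c, r_sphere, r_cyl, warn=0.3):
    """Comfort-zone check returning 'safe', 'warning' or 'unsafe'
    following figure 5. Positions are (x, y, z) in metres; only the
    horizontal (x, y) components are used in this simplification.
    """
    # Horizontal distance between sphere centre and cylinder axis.
    d_axis = np.linalg.norm(np.asarray(robot_c[:2]) - np.asarray(human_c[:2]))
    d = d_axis - r_sphere - r_cyl       # negative => the volumes collide
    if d < 0:
        return "unsafe"
    if d < warn:
        return "warning"
    return "safe"
```

For instance, with a 0.5 m sphere and a 0.4 m cylinder, centres two metres apart give a positive distance and a safe verdict, while centres half a metre apart give a negative distance and an unsafe one.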

In the second strategy, two virtual barriers are defined (figure 6). The first barrier (displayed in green in figure 6) marks the limit between the safe zone (i.e. the zone where the human can move safely and the robot cannot hit him) and the warning zone (i.e. the zone where contact between human and robot can occur). The second barrier (displayed in red in figure 6) marks the limit between the warning zone and the error zone (i.e. the zone where the robot works and can easily hit the operator).


Figure 6 – Virtual barriers defined in the second strategy
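Modelling each barrier as a plane of constant coordinate reduces the check to two threshold comparisons. In this sketch the barriers are planes of constant x with the robot working area at low x values; both thresholds are placeholder values, not those of the real cell:

```python
def zone(human_x, green_x=2.0, red_x=1.0):
    """Classify the operator position against the two barrier planes.

    green_x and red_x are the (placeholder) x coordinates of the green
    and red virtual barriers of figure 6.
    """
    if human_x > green_x:
        return "safe"      # beyond the green barrier
    if human_x > red_x:
        return "warning"   # between the green and the red barriers
    return "error"         # inside the robot working zone
```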

The third strategy is a combination of comfort zones and virtual barriers (figure 7). This strategy provides redundant information: both the human-robot distance and their positions are considered.


Figure 7 – Redundant safety strategy: combination of comfort zones and virtual barriers.
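One natural way to combine the two redundant verdicts is to keep the more conservative of the two. The severity ordering below is an assumption of this sketch, not a detail stated by the project:

```python
def combined_verdict(distance_state, zone_state):
    """Return the most conservative of the two strategy outputs.

    distance_state comes from the comfort-zone check ('safe',
    'warning', 'unsafe'); zone_state comes from the virtual-barrier
    check ('safe', 'warning', 'error'). 'unsafe' and 'error' are
    treated as equally severe (assumed ordering).
    """
    severity = {"safe": 0, "warning": 1, "unsafe": 2, "error": 2}
    return max(distance_state, zone_state, key=severity.__getitem__)
```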

The safety system shows good performance:

  • The robotic device is always recognized;
  • The human operator is recognized when he moves frontally with respect to the TOF cameras. Human recognition must be improved (for example, by increasing the number of TOF cameras) for cases where the operator moves transversally with respect to the cameras;
  • The safety situations are always identified correctly. The algorithm classifies the safety situations with an average delay of 0.86 ± 0.63 s (k = 1), which could be reduced by using real-time hardware.

References

  [1] A. Fornaser, P. Tomasin, M. De Cecco, M. Tavernini, M. Zanetti, "Automatic graph based spatiotemporal extrinsic calibration of multiple Kinect V2", Robotics and Autonomous Systems, vol. 98, pp. 105-125, Dec. 2017.
  [2] B. D. Lucas, T. Kanade, "An iterative image registration technique with an application to stereo vision", Proc. of the 7th International Joint Conference on Artificial Intelligence, IJCAI 1981, vol. 2, pp. 674-679, Aug. 1981.
  [3] N. Dalal, B. Triggs, "Histograms of oriented gradients for human detection", Proc. of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2005, vol. 1, pp. 886-893, Jun. 2005.