First video of the project

Project date:

June 2020

Project contributors:

I was the sole designer and developer on this project.

Project hardware:

I used a UR3 robot, an Intel RealSense camera, and a laptop running Linux.

Project Description:

The goal of this project is to autonomously detect a blue circle and then follow it with the UR3 robot. The camera is attached to the robot's end effector.

First, the camera takes a picture. My computer vision algorithm then detects the centre of the blue circle in the image and calculates the angle of that centre with respect to the centre of the image. From this angle, it determines in which direction in the XY plane the robot should move and passes this information to my custom UR3 kinematics algorithm, which calculates the velocity each joint must reach in order to achieve the desired movement.
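The sketch below illustrates the vision step: threshold blue pixels, find the circle centre, and compute its angle relative to the image centre. My actual node is written in MATLAB; this C++/OpenCV version is only a minimal sketch of the idea, and the HSV thresholds and contour-based centre detection are illustrative assumptions, not the values or method used in the project.

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Returns the angle (radians, measured from the image centre in a standard
// math frame with y pointing up) to the centre of the detected blue circle,
// or false if nothing blue is found.
bool blueCircleDirection(const cv::Mat& bgr, double& angle)
{
    cv::Mat hsv, mask;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

    // Keep only "blue" pixels (assumed HSV range).
    cv::inRange(hsv, cv::Scalar(100, 120, 70), cv::Scalar(130, 255, 255), mask);

    // Take the largest blue blob and use its centroid as the circle centre.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    if (contours.empty()) return false;

    auto largest = std::max_element(contours.begin(), contours.end(),
        [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
            return cv::contourArea(a) < cv::contourArea(b);
        });
    cv::Moments m = cv::moments(*largest);
    if (m.m00 == 0) return false;

    double cx = m.m10 / m.m00, cy = m.m01 / m.m00;

    // Angle of the circle centre with respect to the image centre; image y
    // grows downward, so it is flipped here. The kinematics stage turns this
    // angle into an XY direction for the robot.
    angle = std::atan2(bgr.rows / 2.0 - cy, cx - bgr.cols / 2.0);
    return true;
}
```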

I also wrote a custom collision avoidance algorithm that runs concurrently with the main program. It constantly watches the robot's joint states to make sure the robot will not crash into itself or the ground. If it detects that the robot is about to crash, it publishes a no-go command that stops the robot; otherwise, it publishes a go command. This algorithm publishes a go/no-go command more than 56,000 times each second.
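Structurally, the watchdog looks roughly like the sketch below: it caches the latest joint state and publishes a go/no-go verdict in a tight loop (which is where a very high message rate comes from). The topic names are assumptions, and the actual collision check (forward kinematics plus self- and ground-collision tests) is omitted here.

```cpp
#include <ros/ros.h>
#include <sensor_msgs/JointState.h>
#include <std_msgs/Bool.h>

static sensor_msgs::JointState latest;

void jointStateCallback(const sensor_msgs::JointState::ConstPtr& msg)
{
    latest = *msg;  // Remember the most recent joint configuration.
}

// Placeholder: the real check computes forward kinematics and tests the
// resulting pose against the robot's own body and the ground plane.
bool isSafe(const sensor_msgs::JointState& js)
{
    return !js.position.empty();  // Real check omitted in this sketch.
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "collision_watchdog");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/joint_states", 1, jointStateCallback);
    ros::Publisher pub = nh.advertise<std_msgs::Bool>("/go_no_go", 1);

    // Publish the verdict as fast as possible: true = go, false = no-go.
    while (ros::ok())
    {
        ros::spinOnce();  // Pick up the newest joint state.
        std_msgs::Bool verdict;
        verdict.data = isSafe(latest);
        pub.publish(verdict);
    }
    return 0;
}
```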

Once the main program has calculated the desired joint velocities, and before it sends them to the robot, it first checks the go/no-go commands coming from the collision avoidance system. If the latest command is a go, it sends the velocities to the robot; if it is a no-go, it does nothing until it receives a go command again.
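The gating logic amounts to checking the most recent verdict before every command, as in the sketch below. To keep all sketches in one language it is written in C++, although my actual main node is in MATLAB; the command topic, message type, and 125 Hz loop rate are assumptions for illustration.

```cpp
#include <ros/ros.h>
#include <std_msgs/Bool.h>
#include <std_msgs/Float64MultiArray.h>

static bool go = false;  // Latest verdict from the collision avoidance node.

void goNoGoCallback(const std_msgs::Bool::ConstPtr& msg) { go = msg->data; }

int main(int argc, char** argv)
{
    ros::init(argc, argv, "velocity_gate");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/go_no_go", 1, goNoGoCallback);
    ros::Publisher cmd = nh.advertise<std_msgs::Float64MultiArray>(
        "/joint_group_vel_controller/command", 1);

    ros::Rate rate(125);  // Assumed control rate.
    while (ros::ok())
    {
        ros::spinOnce();
        std_msgs::Float64MultiArray vel;
        vel.data = {0.1, 0.0, 0.0, 0.0, 0.0, 0.0};  // From the kinematics step.
        if (go)
            cmd.publish(vel);  // Safe: forward the desired joint velocities.
        // On a no-go: send nothing and wait for the next go verdict.
        rate.sleep();
    }
    return 0;
}
```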

The computer vision and kinematics algorithms are implemented in a single MATLAB node; the collision avoidance algorithm is implemented in a separate C++ node. These nodes communicate with each other and with the UR3 robot through the Robot Operating System (ROS).

The above video is in real time and has not been sped up. The velocity of each individual joint is limited to 0.47 rad/s. In the video, I also discuss my collision avoidance system with my lab supervisor.

Second video of the project

This video is also in real time and has not been sped up. Here the single-joint velocity limit is increased from 0.47 rad/s to 0.67 rad/s. There is a small performance increase, but the robot wobbles when it reaches its target.

This wobbling behaviour is due to the limited speed of the camera, which can only capture 30 frames per second. As I increase the single-joint velocity limit, the joints move faster; at 0.67 rad/s and 30 fps, a joint can travel up to roughly 0.022 rad between consecutive frames, so in some instances the robot moves past the target point before the next correction can be made. When the system then tries to bring the robot back to the target, the motion is again too fast over too short a distance for a correction to land in time, and the robot oscillates around the target.

There are two main ways this could be avoided on the next attempt:
1- Using a camera with a higher frame rate, for example a 90 or 120 fps camera.
2- Using a variable single-joint speed limit. Since this issue mostly happens when the robot is nearing its target position, we can start with a higher limit and decrease it as the robot gets closer to the target (see the sketch after this list).
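A minimal sketch of idea 2 is below: the per-joint limit tapers from the fast limit down to a slower one as the detected circle centre approaches the image centre. The lower limit and the taper distance are illustrative assumptions, not tuned values.

```cpp
#include <algorithm>

// Returns the single-joint velocity limit (rad/s) for a given pixel distance
// between the detected circle centre and the image centre.
double speedLimit(double pixelError)
{
    const double farLimit  = 0.67;   // Limit when the target is far away.
    const double nearLimit = 0.30;   // Assumed limit right at the target.
    const double taper     = 100.0;  // Assumed pixel error over which to taper.

    // Interpolate linearly between the two limits and clamp at farLimit.
    double t = std::min(pixelError / taper, 1.0);
    return nearLimit + t * (farLimit - nearLimit);
}
```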

My LinkedIn account