Project date:

October 2019

Project contributors:

Group project. I was the lead designer and team leader.

Project hardware:

I used a TurtleBot robot and a laptop running Linux.

Project description:

The robot is placed somewhere in a known maze. It must first localise itself and then autonomously search the maze for a red brick. Once it finds the brick, it must drive toward it.

The first step is to map the maze using the TurtleBot's laser scanner. I used the GMapping package, an implementation of Simultaneous Localisation And Mapping (SLAM). I first drive the robot manually, very slowly, around the maze while it continuously records readings from its laser scanner. These readings are processed into a unified map of the maze, which is then stored on my laptop's hard drive.
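
To give an idea of the raw data this step works with, here is a minimal sketch (not part of the project code) of a ROS node that listens to the same laser scans GMapping consumes and logs each sweep. The topic name /scan is the common TurtleBot default and is an assumption here.

```cpp
// Minimal sketch: subscribe to the laser scans that GMapping consumes.
#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>

void scanCallback(const sensor_msgs::LaserScan::ConstPtr& scan)
{
  // Each message is one sweep: ranges[i] is the distance (in metres)
  // measured at angle angle_min + i * angle_increment.
  ROS_INFO("Received %zu range readings, starting at %.2f rad",
           scan->ranges.size(), scan->angle_min);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "scan_listener");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/scan", 10, scanCallback);
  ros::spin();  // process incoming scans until shutdown
  return 0;
}
```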

The next time the robot is placed anywhere in the maze, it loads this stored map, rotates a few times, takes new laser scan measurements, and compares them with the stored map. In this way, it can localise itself and determine where in the maze it is.
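
The sketch below illustrates this step under some assumptions: it spins the robot in place while a particle-filter localiser narrows down the pose. I assume here a standard ROS amcl setup publishing its estimate on /amcl_pose; the topic names and the covariance threshold are illustrative values, not the project's exact ones.

```cpp
// Hedged sketch: rotate in place until the localiser's pose estimate is tight.
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>
#include <geometry_msgs/PoseWithCovarianceStamped.h>

static bool localised = false;

void poseCallback(const geometry_msgs::PoseWithCovarianceStamped::ConstPtr& msg)
{
  // Covariance on x (index 0) and y (index 7) shrinks as new scans match
  // the stored map. The 0.05 threshold is an illustrative assumption.
  if (msg->pose.covariance[0] < 0.05 && msg->pose.covariance[7] < 0.05)
    localised = true;
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "initial_localiser");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/amcl_pose", 1, poseCallback);
  ros::Publisher cmd = nh.advertise<geometry_msgs::Twist>("/cmd_vel", 1);

  geometry_msgs::Twist spin;
  spin.angular.z = 0.5;  // slow in-place rotation so scans sweep the maze walls

  ros::Rate rate(10);
  while (ros::ok() && !localised)
  {
    cmd.publish(spin);
    ros::spinOnce();
    rate.sleep();
  }
  cmd.publish(geometry_msgs::Twist());  // stop once localised
  return 0;
}
```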

Finally, the robot can begin its autonomous search for the red brick. My chosen search algorithm divides the whole maze into a number of separate squares and searches each square in turn until the red brick is found. During the search, the onboard RGB camera constantly takes pictures, and computer vision is used to look for the red brick in them. At the same time, the localisation algorithm uses odometry and laser scanner data to continuously correct the robot's position on the map. This is necessary because odometry alone is not very accurate: if you localise once at the beginning and then rely only on odometry to update the current position, the assumed position gradually drifts away from the robot's actual position in the maze.
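
As an illustration of the vision step, the sketch below thresholds each camera frame in HSV colour space to find red pixels, using cv_bridge and OpenCV. The camera topic, the HSV ranges, and the pixel-count threshold are assumptions for the example; the project's exact detector may differ.

```cpp
// Hedged sketch of red-brick detection via HSV thresholding.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/imgproc/imgproc.hpp>

void imageCallback(const sensor_msgs::Image::ConstPtr& msg)
{
  cv::Mat bgr = cv_bridge::toCvCopy(msg, "bgr8")->image;
  cv::Mat hsv, lowRed, highRed, redMask;
  cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);

  // Red hue wraps around 0 in HSV, so two ranges are combined.
  cv::inRange(hsv, cv::Scalar(0, 100, 100), cv::Scalar(10, 255, 255), lowRed);
  cv::inRange(hsv, cv::Scalar(160, 100, 100), cv::Scalar(179, 255, 255), highRed);
  redMask = lowRed | highRed;

  if (cv::countNonZero(redMask) > 500)  // illustrative pixel-count threshold
    ROS_INFO("Red brick candidate in view");
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "red_brick_detector");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/camera/rgb/image_raw", 1, imageCallback);
  ros::spin();
  return 0;
}
```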

The project is implemented entirely in a single C++ node running on my laptop. This node communicates with the TurtleBot using the Robot Operating System (ROS).
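
A minimal sketch of what such a single node can look like with roscpp follows: one process that subscribes to the laser, camera, and odometry topics and publishes velocity commands. The callback bodies are left empty, and the topic names are the usual TurtleBot defaults rather than values confirmed from the project.

```cpp
// Sketch of the single-node structure: all sensing and control in one process.
#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>
#include <sensor_msgs/Image.h>
#include <nav_msgs/Odometry.h>
#include <geometry_msgs/Twist.h>

class MazeSearcher
{
public:
  explicit MazeSearcher(ros::NodeHandle& nh)
  {
    scan_sub_  = nh.subscribe("/scan", 10, &MazeSearcher::onScan, this);
    image_sub_ = nh.subscribe("/camera/rgb/image_raw", 1, &MazeSearcher::onImage, this);
    odom_sub_  = nh.subscribe("/odom", 10, &MazeSearcher::onOdom, this);
    cmd_pub_   = nh.advertise<geometry_msgs::Twist>("/cmd_vel", 1);
  }

private:
  void onScan(const sensor_msgs::LaserScan::ConstPtr&) { /* localisation update */ }
  void onImage(const sensor_msgs::Image::ConstPtr&)    { /* red-brick detection */ }
  void onOdom(const nav_msgs::Odometry::ConstPtr&)     { /* dead-reckoning update */ }

  ros::Subscriber scan_sub_, image_sub_, odom_sub_;
  ros::Publisher cmd_pub_;
};

int main(int argc, char** argv)
{
  ros::init(argc, argv, "maze_searcher");
  ros::NodeHandle nh;
  MazeSearcher node(nh);
  ros::spin();  // everything runs from callbacks in this single node
  return 0;
}
```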

The following video is in real time and has not been sped up. It shows GMapping being run on a real maze with the real TurtleBot. I use a wireless keyboard to drive the robot manually around the maze, and you can see on the computer screen that the map of the maze is gradually being built.

The following video is also in real time and has not been sped up. It shows an example of initial localisation followed by a search of a maze environment in a Gazebo simulation. At the start, the robot has several hypotheses about where on the map it is, but after a few turns and continuous laser measurements and processing, it settles on its true position. It then begins the autonomous search of the maze until it finds the red brick and drives towards it.

My LinkedIn account