This project implements an enhanced Q-learning algorithm to control a TurtleBot3 robot. The robot navigates environments simulated in Gazebo, learns to reach target positions from various initial conditions, and avoids obstacles through reinforcement learning.
Ensure you have the following installed:
- ROS (Robot Operating System)
- Gazebo
- TurtleBot3 packages
- Install the TurtleBot3 Gazebo package:
  sudo apt-get install ros-$ROS_DISTRO-turtlebot3-gazebo
- Set the TurtleBot3 model:
  export TURTLEBOT3_MODEL=waffle
- Clone the repository:
  cd ~/catkin_ws/src
  git clone https://github.com/AliiRezaei/turtlebot3_rl.git
- Build the workspace:
  cd ~/catkin_ws
  catkin_make
- Start the ROS master:
  roscore
Launch the simulation environments and run the learning nodes as follows.
- Launch the Empty World environment in Gazebo:
  roslaunch turtlebot3_rl turtlebot3_env_empty_world.launch
- Run the Empty World learning node:
  rosrun turtlebot3_rl learning_empty_world
- Wait for the robot to complete learning and observe the results.
- Launch the House environment in Gazebo:
  roslaunch turtlebot3_rl turtlebot3_env_house.launch
- Run the House learning node:
  rosrun turtlebot3_rl learning_house
- Wait for the robot to complete learning and observe the results.
- Launch the World environment in Gazebo:
  roslaunch turtlebot3_rl turtlebot3_env_world.launch
- Run the World learning node:
  rosrun turtlebot3_rl learning_world
- Wait for the robot to complete learning and observe the results.
Learning data is logged in the LogData directory:
- Empty World: 50,000 episodes
- House: 30,000 episodes
- World: 25,000 episodes
This project showcases the efficiency of a Q-learning algorithm implemented in C++ for controlling the TurtleBot3 robot. The C++ implementation delivers a substantial speedup, completing training in approximately 30 seconds compared to roughly 13 hours for the equivalent MATLAB implementation.
To compare the MATLAB and C++ performance, run the related script from the MATLAB Scripts directory.