VISUAL SLAM NAVIGATION WITH LIMO ROS2 ROBOT
System Architecture
The LIMO robot uses camera input for ArUco marker detection and visual SLAM localization, while LiDAR provides real-time obstacle avoidance. The ROS2-based navigation stack processes sensor data to calculate optimal paths to the centroid of detected markers.
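The LiDAR-based avoidance behaviour described above can be sketched as a simple reactive check over a laser scan. This is a minimal illustration, not the project's actual controller: the sector width, stop distance, and velocities are assumed values, and `avoid_obstacles` is a hypothetical helper name.

```python
import math

def avoid_obstacles(ranges, angle_min, angle_increment,
                    stop_distance=0.35, front_half_angle=math.radians(30)):
    """Return (linear, angular) velocity commands from a LiDAR scan.

    ranges: list of range readings in metres, laid out as in a
    sensor_msgs/LaserScan message. If any reading inside the frontal
    sector is closer than stop_distance, stop and rotate in place;
    otherwise drive forward. All thresholds here are illustrative
    assumptions, not values from the project.
    """
    for i, r in enumerate(ranges):
        angle = angle_min + i * angle_increment
        if abs(angle) <= front_half_angle and 0.0 < r < stop_distance:
            return 0.0, 0.5   # blocked: rotate to search for a clear heading
    return 0.2, 0.0           # clear path: move forward

# Example: a scan spanning -pi/2..pi/2 with an obstacle dead ahead
n = 181
angle_min = -math.pi / 2
inc = math.pi / (n - 1)
scan = [2.0] * n
scan[n // 2] = 0.2  # 0.2 m reading at angle 0
print(avoid_obstacles(scan, angle_min, inc))  # (0.0, 0.5)
```

In a real ROS2 node this function would sit in a `LaserScan` subscriber callback, with its return values published as a `geometry_msgs/Twist` command.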
Key Contributions:
- Developed ROS2-based system for autonomous navigation toward centroid of four ArUco markers using camera input
- Implemented OpenCV-based marker detection and pose estimation
- Integrated LiDAR-based obstacle avoidance with visual SLAM localization
- Worked within constraints of preinstalled ROS2 Humble packages
- Designed navigation algorithms to calculate optimal paths to the marker centroid
- Conducted extensive testing in both simulated and real-world environments
Core Technologies:
- ROS2 Humble
- OpenCV (ArUco marker detection and pose estimation)
- LiDAR (real-time obstacle avoidance)
- Visual SLAM
- Gazebo (simulation)
Project Description:
This autonomous robotics project implemented a complete visual SLAM navigation system for the LIMO robot using ROS2. The system combines computer vision (ArUco marker detection) with LiDAR-based obstacle avoidance to enable reliable autonomous navigation in constrained environments.
The robot calculates the centroid position from four detected ArUco markers and navigates toward this point while avoiding obstacles. The implementation required careful integration of multiple sensor inputs and coordination between various ROS2 packages while working within the constraints of the preinstalled ROS2 Humble environment.
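The centroid-and-heading computation described above can be sketched as follows. Marker positions in the robot's frame are assumed inputs (e.g. derived from pose estimates), and the function names are illustrative, not the project's actual API.

```python
import math

def marker_centroid(marker_positions):
    """Centroid of marker (x, y) positions in a common frame."""
    n = len(marker_positions)
    return (sum(p[0] for p in marker_positions) / n,
            sum(p[1] for p in marker_positions) / n)

def goal_error(robot_pose, goal):
    """Distance and bearing error from a robot pose (x, y, yaw) to a goal (x, y)."""
    dx, dy = goal[0] - robot_pose[0], goal[1] - robot_pose[1]
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - robot_pose[2]
    # Normalise the bearing error to (-pi, pi]
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))
    return distance, bearing

# Four markers at the corners of a 2 m square -> centroid at its centre
markers = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
goal = marker_centroid(markers)
print(goal)  # (1.0, 1.0)
print(goal_error((0.0, 0.0, 0.0), goal))
```

A simple controller would then turn to reduce the bearing error and drive forward to reduce the distance, deferring to the obstacle-avoidance layer whenever the LiDAR reports a blocked path.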
Challenges included sensor fusion between visual and LiDAR data, accurate marker detection under varying lighting conditions, and ensuring real-time performance of the navigation stack. The project demonstrated robust performance in both simulated (Gazebo) and real-world testing environments.