eYSIP-24_Field_Exploration_ROS_Vehicles

Advancing Field Exploration Using ROS-powered Robotic Vehicles

eysip_2024_expo_poster_page-0001

Click the above image for Project Video Demonstration

ARMS Lab, Systems and Control Engineering Department, IIT Bombay & e-Yantra


Table of Contents

  1. Introduction
  2. Project Goal
  3. Tech Stack
  4. Setup
  5. Key Challenges
  6. Hardware Development
  7. Control System Design
  8. ROS2-Teleoperation
  9. Localization and Odometry
  10. Motion Planning
  11. SLAM
  12. Future Development

Introduction

Field exploration robots have gained significant attention in recent years due to their potential applications in various domains such as agriculture, environmental monitoring, and autonomous driving research. These robots are designed to operate autonomously in diverse environments, collecting data and performing tasks that would otherwise be labor-intensive or hazardous for humans. This project focuses on developing a ROS-powered robotic vehicle designed for field exploration. The vehicle is capable of traversing varied ground surfaces autonomously and can be used for tasks such as autonomous field mapping, agricultural plant monitoring, and testing autonomous driving algorithms.

Project Goal

The primary goal of this project is to develop a ROS2-enabled four-wheel-drive vehicle with Ackermann steering capable of waypoint navigation and obstacle avoidance. The vehicle carries various sensors and an onboard computer to perform Simultaneous Localization and Mapping (SLAM). The project encompasses hardware development, sensor integration, firmware and algorithm development, and extensive testing and validation to ensure the vehicle’s performance and reliability in real-world scenarios.

Tech Stack

ROS 2 (Jazzy) | Arduino | Raspberry Pi | Nav2

Setup

Prerequisites

Remote Desktop Setup

  1. Install ROS2 Jazzy:

    After installing, run these commands once:

    source /opt/ros/jazzy/setup.bash
    echo "source /opt/ros/jazzy/setup.bash" >> ~/.bashrc
    source ~/.bashrc
    
    
  2. Create a workspace and clone the repository

    mkdir -p ~/ros2_ws/src      # Create a workspace, build, and source it
    cd ~/ros2_ws
    colcon build --symlink-install
    source install/local_setup.bash
    
    cd ~/ros2_ws/src      # Clone the repo inside the src folder
    git clone https://github.com/TahsinOP/eYSIP-24_Field_Exploration_ROS_Vehicles.git
    
    

    To avoid sourcing the workspace every time you open a new terminal, run this once:

    echo "source ~/ros2_ws/install/setup.bash" >> ~/.bashrc
    source ~/.bashrc      # source bashrc as we have made changes
    

Interfacing Hardware

Raspberry Pi 5

  1. Clone the repository:
    cd ~/pi_ws/src
    git clone https://github.com/TahsinOP/eYSIP-24_Field_Exploration_ROS_Vehicles.git
    
  2. USB serial permissions: add your user to the dialout group
    sudo usermod -aG dialout $USER

BMX160 IMU

Dependencies

  1. Install smbus:
    sudo apt update
    sudo apt install python3-smbus
    

IMU Node

  2. Launch the IMU node:
    ros2 launch imu_pkg imu_rviz_robot.launch.py
    

Logging Data

  3. Check IMU data:
    ros2 topic echo /imu_raw
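
Beyond ros2 topic echo, a small rclpy subscriber can sanity-check the IMU stream by logging the yaw angle. The following is a minimal sketch that assumes the driver publishes standard sensor_msgs/msg/Imu messages on /imu_raw (verify the type with ros2 topic info /imu_raw):

    # Minimal sketch: subscribe to /imu_raw and log the yaw angle.
    # Assumes sensor_msgs/msg/Imu on /imu_raw; adjust if the driver uses a custom type.
    import math

    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Imu


    class ImuYawLogger(Node):
        def __init__(self):
            super().__init__('imu_yaw_logger')
            self.create_subscription(Imu, '/imu_raw', self.callback, 10)

        def callback(self, msg):
            # Convert the orientation quaternion to a yaw angle (rotation about Z).
            q = msg.orientation
            yaw = math.atan2(2.0 * (q.w * q.z + q.x * q.y),
                             1.0 - 2.0 * (q.y * q.y + q.z * q.z))
            self.get_logger().info(f'yaw: {math.degrees(yaw):.2f} deg')


    def main():
        rclpy.init()
        rclpy.spin(ImuYawLogger())
        rclpy.shutdown()


    if __name__ == '__main__':
        main()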
    

YDLidar Tmini-Pro

Connect and Launch the Lidar Driver

  1. Ensure the YDLidar is connected to a USB port and powered on.

  2. Launch the YDLidar node with the following command:
    ros2 launch ydlidar_ros2_driver ydlidar_launch.py
    
  3. Visualize the lidar scan data:
    ros2 launch rcar_viz display.launch.py
    

    This will launch RViz2 with a robot model and the fixed frame set to "odom".

  4. Optional: Run IMU pose estimation to see the odometry data:
    ros2 run imu_pose_estimation estimator
    

    Alternatively, set the fixed frame to "base_link" in RViz to see laser data on the /scan topic.
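
To confirm the lidar is producing usable data without opening RViz, a minimal subscriber can report the nearest obstacle seen in each scan. This sketch assumes the driver publishes standard sensor_msgs/msg/LaserScan messages on the /scan topic:

    # Minimal sketch: report the nearest valid return from each /scan message.
    # Assumes sensor_msgs/msg/LaserScan on /scan (the driver's default scan topic).
    import math

    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import LaserScan


    class NearestObstacle(Node):
        def __init__(self):
            super().__init__('nearest_obstacle')
            self.create_subscription(LaserScan, '/scan', self.callback, 10)

        def callback(self, scan):
            # Keep only finite readings inside the sensor's valid range.
            valid = [r for r in scan.ranges
                     if math.isfinite(r) and scan.range_min < r < scan.range_max]
            if valid:
                self.get_logger().info(f'nearest obstacle: {min(valid):.2f} m')


    def main():
        rclpy.init()
        rclpy.spin(NearestObstacle())
        rclpy.shutdown()


    if __name__ == '__main__':
        main()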

Key Challenges

Hardware Development

car

Figure 1: Prototype Vehicle

Control System Design

System Design

Figure 2: Control System Design

1. Low-Level Control

2. Onboard Raspberry Pi 5

3. Remote PC

Summary

ROS2-Teleoperation

Achieving complete low-level control of the car through keyboard commands using ROS2 for communication over a shared network is a significant milestone. This setup allows for real-time teleoperation, enabling users to control the car remotely using simple keyboard inputs. By implementing teleoperation nodes in ROS 2, commands can be sent to the robot, providing a responsive and reliable control system.

Implementation Steps:

  1. Install necessary ROS 2 packages for teleoperation.
  2. Write a ROS 2 node to capture keyboard inputs and publish velocity commands (a minimal sketch is given after this list).
  3. Configure the robot to subscribe to these velocity commands and actuate the motors accordingly.
  4. Test the teleoperation setup to ensure reliable and responsive control.
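
As a concrete illustration of step 2, the sketch below reads single key presses from the terminal and publishes geometry_msgs/Twist messages. It is a simplified stand-in for the teleop_bot pub node; the /cmd_vel topic name and the speed constants are assumptions for illustration, not the package's actual interface.

    # Simplified teleop publisher sketch: maps WASD keys to Twist commands.
    # The /cmd_vel topic and the speed constants are illustrative assumptions.
    import sys
    import termios
    import tty

    import rclpy
    from geometry_msgs.msg import Twist
    from rclpy.node import Node

    KEY_BINDINGS = {
        'w': (0.5, 0.0),    # forward
        's': (-0.5, 0.0),   # reverse
        'a': (0.5, 0.5),    # forward + steer left
        'd': (0.5, -0.5),   # forward + steer right
    }


    def read_key():
        # Read a single keypress without waiting for Enter.
        fd = sys.stdin.fileno()
        old = termios.tcgetattr(fd)
        try:
            tty.setraw(fd)
            return sys.stdin.read(1)
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old)


    def main():
        rclpy.init()
        node = Node('teleop_keyboard')
        pub = node.create_publisher(Twist, '/cmd_vel', 10)
        try:
            while rclpy.ok():
                key = read_key()
                if key == 'q':          # press q to quit
                    break
                linear, angular = KEY_BINDINGS.get(key, (0.0, 0.0))
                msg = Twist()
                msg.linear.x = linear
                msg.angular.z = angular
                pub.publish(msg)
        finally:
            node.destroy_node()
            rclpy.shutdown()


    if __name__ == '__main__':
        main()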

How to Run the Teleop Node:

  1. SSH into Raspberry Pi:
    ssh arms@192.168.0.171    # Change the IP address and Pi name  accordingly 
    
  2. Run Subscriber Node On Pi/Remote PC
     ros2 run teleop_bot sub
    
  3. Run Publisher Node On Pi
    ros2 run teleop_bot pub
    

    If the Pi is short on compute, run the publisher on the remote PC using the same command stated above.

Now you can control the car using the WASD keys for movement.
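
On the robot side, the subscriber node turns these velocity commands into motor actuation. The sketch below shows one plausible shape for such a node: it listens for geometry_msgs/Twist messages and forwards them to the low-level controller over a serial link. The /cmd_vel topic, the /dev/ttyUSB0 port, and the ASCII command format are illustrative assumptions, not the actual teleop_bot implementation.

    # Sketch of the robot-side subscriber: Twist commands in, serial commands out.
    # Topic name, serial port, baud rate, and command format are assumptions.
    import rclpy
    import serial  # pyserial
    from geometry_msgs.msg import Twist
    from rclpy.node import Node


    class CmdVelBridge(Node):
        def __init__(self):
            super().__init__('cmd_vel_bridge')
            # Serial link to the low-level motor controller (port/baud are placeholders).
            self.port = serial.Serial('/dev/ttyUSB0', 115200, timeout=0.1)
            self.create_subscription(Twist, '/cmd_vel', self.callback, 10)

        def callback(self, msg):
            # Forward speed and steering rate as a simple ASCII line, e.g. "V 0.50 S -0.30".
            command = f'V {msg.linear.x:.2f} S {msg.angular.z:.2f}\n'
            self.port.write(command.encode('ascii'))


    def main():
        rclpy.init()
        rclpy.spin(CmdVelBridge())
        rclpy.shutdown()


    if __name__ == '__main__':
        main()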

Localization and Odometry

To run the odometry Node

  1. SSH into Raspberry Pi:
    ssh arms@192.168.0.171    # Change the IP address and Pi name  accordingly 
    
  2. Run Odom Node On remote PC
     ros2 run localization imu_odometry
    
    

    This will give you the (x, y, yaw) pose of the vehicle and also publish it on the /odom topic.
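
For reference, the sketch below shows how an (x, y, yaw) estimate can be packed into a nav_msgs/Odometry message and published on /odom. It dead-reckons from constant placeholder velocities purely for illustration; the actual imu_odometry node integrates real sensor data and may structure things differently.

    # Sketch: publish an (x, y, yaw) pose estimate as nav_msgs/Odometry on /odom.
    # The constant velocities are placeholders; real data would come from the IMU/encoders.
    import math

    import rclpy
    from nav_msgs.msg import Odometry
    from rclpy.node import Node


    class OdomPublisher(Node):
        def __init__(self):
            super().__init__('odom_publisher_sketch')
            self.pub = self.create_publisher(Odometry, '/odom', 10)
            self.x = self.y = self.yaw = 0.0
            self.v = 0.2       # assumed forward speed [m/s], placeholder
            self.wz = 0.1      # assumed yaw rate [rad/s], placeholder
            self.dt = 0.05
            self.create_timer(self.dt, self.update)

        def update(self):
            # Dead-reckon the planar pose, then fill in the Odometry message.
            self.yaw += self.wz * self.dt
            self.x += self.v * math.cos(self.yaw) * self.dt
            self.y += self.v * math.sin(self.yaw) * self.dt

            odom = Odometry()
            odom.header.stamp = self.get_clock().now().to_msg()
            odom.header.frame_id = 'odom'
            odom.child_frame_id = 'base_link'
            odom.pose.pose.position.x = self.x
            odom.pose.pose.position.y = self.y
            # Yaw encoded as a quaternion about the Z axis.
            odom.pose.pose.orientation.z = math.sin(self.yaw / 2.0)
            odom.pose.pose.orientation.w = math.cos(self.yaw / 2.0)
            self.pub.publish(odom)


    def main():
        rclpy.init()
        rclpy.spin(OdomPublisher())
        rclpy.shutdown()


    if __name__ == '__main__':
        main()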

Motion Planning

SLAM

  1. Map the environment using slam_toolbox (already done and saved in the maps folder)
  2. Launch the odometry node using the steps given in the previous sections
  3. Launch the navigation and localization launch files
  4. Run the GoalNode and pure pursuit script (a sketch of the pure pursuit geometry is given after this list)
  5. Give a 2D goal pose in the Nav2 RViz window
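
For context, the core geometry used by a pure pursuit controller is sketched below: given the vehicle pose and a lookahead point on the path, it returns an Ackermann steering angle. The wheelbase and the example values are placeholders, not the project's tuned parameters or its actual script.

    # Pure pursuit core geometry (sketch): steering angle toward a lookahead point.
    # The wheelbase value is an illustrative placeholder.
    import math

    WHEELBASE = 0.3    # [m] distance between front and rear axles (placeholder)


    def pure_pursuit_steering(x, y, yaw, goal_x, goal_y):
        """Return the Ackermann steering angle [rad] that drives the rear axle
        along an arc through the lookahead point (goal_x, goal_y)."""
        # Express the lookahead point in the vehicle frame.
        dx = goal_x - x
        dy = goal_y - y
        local_x = math.cos(yaw) * dx + math.sin(yaw) * dy
        local_y = -math.sin(yaw) * dx + math.cos(yaw) * dy

        lookahead = math.hypot(local_x, local_y)
        if lookahead < 1e-6:
            return 0.0

        # Curvature of the arc through the lookahead point, then the steering angle.
        curvature = 2.0 * local_y / (lookahead ** 2)
        return math.atan(WHEELBASE * curvature)


    if __name__ == '__main__':
        # Example: vehicle at the origin facing +x, lookahead point ahead and to the left.
        print(pure_pursuit_steering(0.0, 0.0, 0.0, 1.0, 0.2))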

Future Development

  1. Upgrading Hardware for Outdoor Use: Enhance the hardware to function robustly in outdoor environments and incorporate GPS for improved outdoor localization.

  2. Developing a Mission Planner Stack: Create a mission planner stack capable of generating and updating paths for the vehicle based on environmental variables, enabling more dynamic and adaptive route planning.

  3. Implementing State-of-the-Art Algorithms: Integrate deep learning and reinforcement learning-based algorithms to achieve more adaptive and robust navigation, enabling the vehicle to better handle diverse and unpredictable conditions. Upgrade the current algorithms, which are designed for static settings, to better handle dynamic environments both indoors and outdoors.

  4. Enhanced Sensor Integration: Incorporate additional sensors such as RGB-D cameras and ultrasonic sensors to improve environmental perception, and implement advanced algorithms such as ORB-SLAM.