JackalCrowdEnv, an OpenAI gym style environment for running navigation tasks with the Jackal robot in Gazebo and ROS

This repository contains

  • JackalCrowdEnv, an OpenAI gym style environment for running navigation tasks with the Jackal robot in Gazebo and ROS
  • Documentation for the above, including an explanation of the various scenarios used

Installation and Setup

System requirements:

    Ubuntu: 18.04
    ROS: melodic
    python: 3.6
    Anaconda

Install ROS Jackal packages:

	sudo apt-get install ros-melodic-jackal*

If you have no catkin workspace yet, create one:

   mkdir -p ~/catkin_ws/src
   cd ~/catkin_ws/src
   echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc

If you already have one, then navigate to its src folder.

Either way, continue below:

   git clone https://github.com/AMR-/JackalCrowdEnv.git
   cd ..
   catkin_make
   source devel/setup.bash

To install the required Python packages into your virtual environment:

pip install -r packages.txt

Usage

Note: the ROS package name is naviswarm

Bring up the simulator

In one terminal:

    roslaunch naviswarm jackal_world.launch

The above command must be running before you instantiate the CrowdEnv object.

Import CrowdEnv

from crowdenv.rl import CrowdENV

Instantiate CrowdEnv; here are some examples:

env = CrowdENV()

env = CrowdENV(scenarios_index=10, 
               max_steps=1000, 
               random_seed=0)

Here is an explanation of each of the arguments of CrowdEnv:

  • scenarios_index: int - the id of the environmental scenario (obstacle and goal configuration) to set up in Gazebo. Default is the empty scenario. See the Scenarios section below.
  • collision_threshold: float - how close (in meters) the robot must be to an obstacle for it to count as a collision
  • target_threshold: float - how close (in meters) the robot must be to the goal for the goal to count as reached
  • step_time: float - number of seconds per timestep (default 0.1)
  • max_steps: int - maximum number of timesteps per episode, even if the goal is not reached and no collision occurs
  • random_seed: int - there is some randomness in the environment; use this to set the random seed
  • vel_expanded: bool - set to False (default) to use the 6-action space, or True to use the expanded 10-action space. See details in Action Spaces below.
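
Because CrowdEnv follows the OpenAI Gym style, a typical episode loop looks roughly like the sketch below. The standard Gym reset()/step() signature (observation, reward, done, info) and the action_space attribute are assumptions based on that convention and may differ slightly in this package.

    # Minimal episode-loop sketch (assumes a Gym-style reset/step and action_space).
    from crowdenv.rl import CrowdENV

    env = CrowdENV(scenarios_index=10, max_steps=1000, random_seed=0)

    obs = env.reset()
    done = False
    total_reward = 0.0
    while not done:
        action = env.action_space.sample()  # replace with your policy's action
        obs, reward, done, info = env.step(action)
        total_reward += reward
    print("episode return:", total_reward)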

Scenarios

| #  | Description |
|----|-------------|
| 0  | Empty env. where the goal is 2 m to the upper right of the robot |
| 1  | Empty env. where the goal is 2 m to the lower right of the robot |
| 2  | Empty env. where the goal is 2 m ahead of the robot |
| 3  | Empty env. where the goal is 2 m behind the robot |
| 4  | Empty env. where the goal is 10 m to the upper right of the robot |
| 5  | Empty env. where the goal is 10 m to the lower right of the robot |
| 6  | Empty env. where the goal is 10 m ahead of the robot |
| 7  | Empty env. where the goal is 10 m behind the robot |
| 8  | Empty env. in the lower area; goal and start locations are random |
| 9  | Empty env. in the upper area; goal and start locations are random |
| 10 | Cross-shaped obstacle in the middle of the area; robot starts on the left side of the obstacle |
| 11 | Cross-shaped obstacle in the middle of the area; robot starts on the right side of the obstacle |
| 12 | Diamond-shaped obstacle in the middle of the area; robot starts on the left side of the obstacle |
| 13 | Diamond-shaped obstacle in the middle of the area; robot starts on the right side of the obstacle |

Action Spaces

Standard Action Space:

| # | Description | Linear Vel. | Angular Vel. |
|---|-------------|-------------|--------------|
| 0 | Forward Right | 1 | -1 |
| 1 | Rotate Right | 0 | -1 |
| 2 | Straight Forward | 1 | 0 |
| 3 | Stop | 0 | 0 |
| 4 | Forward Left | 1 | 1 |
| 5 | Rotate Left | 0 | 1 |

Expanded Action Space:

| #   | Description | Linear Vel. | Angular Vel. |
|-----|-------------|-------------|--------------|
| 0-5 | (Same as standard) | | |
| 6   | Slightly Forward Right | 0.5 | -0.5 |
| 7   | Slightly Rotate Right | 0 | -0.5 |
| 8   | Slightly Forward Left | 0.5 | 0.5 |
| 9   | Slightly Rotate Left | 0 | 0.5 |
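
As a sketch of how the two spaces are selected, assuming actions are passed to step() as integer indices into the tables above (an assumption based on the Gym-style discrete interface):

    # Sketch: using the expanded 10-action space (indices 0-9).
    # Assumes actions are plain integer indices into the tables above.
    from crowdenv.rl import CrowdENV

    env = CrowdENV(vel_expanded=True)        # expanded space: actions 0-9
    obs = env.reset()
    obs, reward, done, info = env.step(2)    # 2 = Straight Forward
    obs, reward, done, info = env.step(6)    # 6 = Slightly Forward Right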

State Space

| Content | goal | velocity | Occupancy Grid of Lidar |
|---------|------|----------|-------------------------|
| Size    | 2    | 1        | 210                     |
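
Assuming the observation is returned as a flat array laid out in the order shown above (2 goal values, 1 velocity value, and 210 occupancy-grid cells, 213 entries in total), it could be unpacked as in the sketch below; the flat-array layout is an assumption, not something this README specifies.

    # Sketch: splitting a flat 213-element observation into its components.
    # The (goal, velocity, occupancy grid) ordering follows the table above;
    # the flat-array representation itself is an assumption.
    import numpy as np

    def split_observation(obs):
        obs = np.asarray(obs)
        goal = obs[0:2]          # relative goal (2 values)
        velocity = obs[2]        # current velocity (1 value)
        lidar_grid = obs[3:213]  # lidar occupancy grid (210 values)
        return goal, velocity, lidar_grid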

Credits

These environments were created by Aaron M. Roth and Jing Liang. To cite this repository, please cite our paper "XAI-N: Sensor-based Robot Navigation using Expert Policies and Decision Trees."

@inproceedings{roth2021xain,
  title={XAI-N: Sensor-based Robot Navigation using Expert Policies and Decision Trees},
  author={Roth, Aaron M. and Liang, Jing and Manocha, Dinesh},
  booktitle={2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2021},
  organization={IEEE}
}
