# subt/doc/virtual-howto.md: update to the current state (#726)

Merged
merged 5 commits into from
Nov 13, 2020
Merged
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
46 changes: 30 additions & 16 deletions subt/doc/virtual-howto.md
Original file line number Diff line number Diff line change
```
docker pull osrf/subt-virtual-testbed:cloudsim_sim_latest
docker pull osrf/subt-virtual-testbed:cloudsim_bridge_latest
```

## Preconditions

A 3D-capable X server needs to be running under the same user that is going to
run the simulation. If the X server runs on display `:0`, the `DISPLAY`
variable needs to be set and exported:
```
export DISPLAY=:0
```
The `XAUTHORITY` variable also needs to be set to point to the location of the
Xauthority file. On Ubuntu it could be, for example:
```
$ echo $XAUTHORITY
/run/user/1000/gdm/Xauthority
```
Another possibility is to run `xhost +` to disable access control on the X
server entirely, or `xhost +local:` to allow any local user to access the X server.

The main point here is that the simulation must be able to find and access the 3D X server.
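
To verify that the X server is reachable and 3D-capable, a quick check is
possible (a sketch; it assumes `glxinfo` from the `mesa-utils` package is
installed and that the paths match the examples above):
```
export DISPLAY=:0
export XAUTHORITY=/run/user/1000/gdm/Xauthority  # path from the example above

glxinfo | grep "direct rendering"  # should print "direct rendering: Yes"
```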

## Build and run a simple unittest image locally
```
./subt/docker/build.bash unittest
./subt/docker/run.bash unittest
```

It is also necessary to run the simulation and the bridge in two other terminals:
- terminal 1
```
xhost +local:root
ROBOT=X100L WORLD=simple_cave_01 ./subt/script/sim.bash
```
Note that the `xhost` workaround is not secure; it is the current way to start
the process for the first time, open a screen session and then use the
simulator remotely via ssh.
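That remote workflow could look roughly like this (a sketch; the session name
`sim` is arbitrary):
```
xhost +local:root
screen -S sim                # start a named screen session
ROBOT=X100L WORLD=simple_cave_01 ./subt/script/sim.bash
# detach with Ctrl-a d, log out, and later reattach over ssh:
screen -r sim
```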

- terminal 2
```
ROBOT=X100L WORLD=simple_cave_01 ./subt/script/bridge.bash
```

Note that the configuration and the robot name are variable. The command above
with robot name X100L encodes exploring for 100 s and navigating along a wall
on the left. Our own ROBOTIKA_X2_SENSOR_CONFIG_1 is used by default. It is
a small robot with a 30 m lidar, a 640x380 RGBD camera and a gas detector.
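
For illustration, here is how a couple of names decode (the decoding of
X0F200L comes from an earlier revision of this HOWTO; the general pattern is an
inference, not a documented spec):
```
ROBOT=X100L    # explore for 100 s, follow the wall on the left
ROBOT=X0F200L  # wait 0 s, then explore for 200 s, wall on the left
```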

The unittest should display the number of received messages for scan, image,
imu, odometry and clock. After 10000 clock […]

[…] solution not covered by this HOWTO.
```commandline
./subt/docker/run.bash robotika
```
which by default runs `/osgar-ws/run_solution.bash`, which itself is a soft
link to `/osgar-ws/src/osgar/subt/docker/robotika/run_solution.bash` inside
the container. To get a shell inside the container instead, run
```commandline
./subt/docker/run.bash robotika bash
```
and you can call `run_solution.bash` when you are ready.
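
For example (a sketch of the interactive variant):
```commandline
./subt/docker/run.bash robotika bash
# ... inside the container, when ready:
/osgar-ws/run_solution.bash
```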

A copy of the `osgar` directory from the time of the build of the image is
located at `/osgar-ws/src/osgar/`. For local development it is advantageous
to mount your `osgar` directory from the host over this directory in the container.

```commandline
./subt/script/devel.bash
```

When you do so, you can edit the files as you are used to. To rebuild the ROS
nodes from within the running container, call `make` in the `/osgar-ws/`
directory. After that, running `/osgar-ws/run_solution.bash` will run the
rebuilt version.
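
A typical edit-rebuild-run cycle inside the container then looks like this
(based on the paths above):
```commandline
cd /osgar-ws
make                         # rebuild the ROS nodes
/osgar-ws/run_solution.bash  # run the rebuilt solution
```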

At this moment you should see a message about waiting for the ROS master,
debug counts of received messages (similar to the unittest) and Python3
outputs of the robot navigating towards the gate. The exploration reports the
number and type of messages received by OSGAR.

When the robot finishes, a logfile is available in `/osgar-ws/logs/`.
It is necessary to copy the logfile (for example
via `docker cp`) to the host for further analysis, otherwise
it will be lost with the termination of the container.
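
A minimal sketch of that copy step (the container name `robotika` here is an
assumption; check `docker ps` for the actual name or ID):
```commandline
docker ps                                   # find the running container's name or ID
docker cp robotika:/osgar-ws/logs/ ./logs/  # copy the logs out before the container exits
```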
