# Minor changes for 4.1 in tub conversion script and developer doc #708

Merged 4 commits on Dec 23, 2020
* Update doc with donkey train command.
* Update doc with developer section for building your own models in donkey 4.1.
* Integrate changes from PR feedback
DocGarbanzo committed Dec 23, 2020
commit 5465d41a50023e1a484eb15f719da75dd8966e76
32 changes: 23 additions & 9 deletions docs/dev_guide/model.md
@@ -67,6 +67,8 @@ only the image as input.

The function returns a single data item if the model has only one input. You
need to return a tuple if your model uses more than one input.


**Note:** _If your model has more than one input, the image must come first in
the tuple._
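As a sketch, a hypothetical two-input pilot (image plus an IMU vector) could implement this as follows. The class, the `'imu'` key, and the plain-dict record access are illustrative assumptions; the real `TubRecord` API may differ:

```python
import numpy as np

class KerasImuExample:
    """Hypothetical two-input pilot: image plus IMU vector."""

    def x_transform(self, record):
        # record is treated as a plain dict here for illustration
        img = record['cam/image_array']
        imu = np.array(record['imu'], dtype=np.float32)
        # more than one input -> return a tuple, image first
        return img, imu
```

A single-input model would simply return `img` on its own.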

@@ -89,9 +91,11 @@ be fed into `tf.data`. Note, `tf.data` expects a dictionary if the model has
more than one input variable, so we have chosen to use dictionaries also in the
one-argument case for consistency. Above we have shown the implementation in the
base class which works for all models that have only the image as input. You
don't have to override either `x_transform` or `x_translate` if your
model only uses the image as input data.


**Note:** _the keys of the dictionary must match the name of the **input**
layers in the model._
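Continuing the hypothetical two-input example, `x_translate` would pack the tuple into such a dictionary (the layer names `'img_in'` and `'imu_in'` are assumed here and must match the model's `Input` layers):

```python
class XTranslateExample:
    """Hypothetical translation for a two-input (image + IMU) model."""

    def x_translate(self, x):
        # x is the tuple produced by x_transform, image first;
        # the keys must match the model's Input layer names
        img, imu = x
        return {'img_in': img, 'imu_in': imu}
```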

@@ -105,6 +109,8 @@

```python
def y_translate(self, y: XY) -> Dict[str, Union[float, np.ndarray]]:
    ...
```
Similar to the above, this provides the translation of the `y` data into the
dictionary required for `tf.data`. This example shows the implementation of
`KerasLinear`.


**Note:** _the keys of the dictionary must match the name of the **output**
layers in the model._
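For a linear-style model predicting angle and throttle, the translation might look like the following sketch. The output layer names `'n_outputs0'`/`'n_outputs1'` follow the `output_shapes` example shown later in this guide; the class itself is hypothetical:

```python
class YTranslateExample:
    """Hypothetical y-translation for an angle/throttle model."""

    def y_translate(self, y):
        # y holds (angle, throttle); the keys must match the
        # model's output layer names
        angle, throttle = y
        return {'n_outputs0': angle, 'n_outputs1': throttle}
```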

@@ -120,8 +126,12 @@

```python
def output_shapes(self):
    ...
```
This function returns a tuple of _two_ dictionaries that tell TensorFlow which
shapes are used in the model. We show the `KerasCategorical` model as an
example here.


**Note 1:** _As above, the keys of the two dictionaries must match the name
of the **input** and **output** layers in the model._


**Note 2:** _Where the model returns scalar numbers, the corresponding
type has to be `tf.TensorShape([])`._

@@ -146,6 +156,8 @@ Here we are showing the implementation of the linear model. Please note that
the input tensor shape always contains the batch dimension in the first
place, hence the shape of the input image is adjusted from
`(120, 160, 3) -> (1, 120, 160, 3)`.


**Note:** _If you are passing another array in the `other_arr` variable, you
will have to do a similar re-shaping._
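The batch-dimension adjustment can be sketched with numpy:

```python
import numpy as np

# a single input image as stored in the tub: (height, width, channels)
img = np.zeros((120, 160, 3), dtype=np.float32)

# prepend the batch dimension expected at inference time
batched = img[None, ...]    # equivalent to img.reshape((1,) + img.shape)
print(batched.shape)        # (1, 120, 160, 3)
```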

@@ -182,7 +194,7 @@ class KerasSensors(KerasPilot):

```python
        sensor_in = Input(shape=(self.num_sensors, ), name='sensor_in')
        y = sensor_in
        z = concatenate([x, y])
        # here we add two more dense layers
        z = Dense(50, activation='relu', name='dense_3')(z)
        z = Dropout(drop)(z)
        z = Dense(50, activation='relu', name='dense_4')(z)
```
@@ -242,9 +254,11 @@ class KerasSensors(KerasPilot):

```python
                  'n_outputs1': tf.TensorShape([])})
        return shapes
```
We could have inherited from `KerasLinear` which already provides the
implementation of `y_transform(), y_translate(), compile()`. However, to
make it explicit for the general case we have implemented all functions here.
The model requires the sensor data to be an array in the TubRecord with key
`"sensor"`.
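A tub record for this model might then look like the following. The field names other than `"sensor"` follow the standard donkey tub format, and all values here are made up:

```python
# one record as it could appear in the tub, shown as a Python dict
record = {
    "cam/image_array": "123_cam_image_array_.jpg",  # image file reference
    "user/angle": 0.12,
    "user/throttle": 0.45,
    "sensor": [0.33, 0.81],  # extra array consumed by KerasSensors
}
```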

### Creating a tub

101 changes: 98 additions & 3 deletions docs/utility/donkey.md
@@ -66,13 +66,13 @@ donkey tubclean <folder containing tubs>
* Hit `Ctrl + C` to exit

## Train the model
**Note:** _This section only applies to version >= 4.1_
This command trains the model.
```bash
donkey train --tub=<tub_path> [--config=<config.py>] [--model=<model path>] [--model_type=(linear|categorical|inferred)]
```
The `createcar` command still creates a `train.py` file for backward
compatibility, but it's not required for training.


## Make Movie from Tub
@@ -95,6 +95,52 @@ donkey makemovie --tub=<tub_path> [--out=<tub_movie.mp4>] [--config=<config.py>]
* optional `--start` and/or `--end` can specify a range of frame numbers to use.
* the optional `--scale` will cause the output image to be scaled by this amount

## Check Tub

This command allows you to see how many records are contained in any/all tubs. It will also open each record and ensure that the data is readable and intact. If not, it will allow you to remove corrupt records.

> Note: This should be moved from manage.py to donkey command

Usage:

```bash
donkey tubcheck <tub_path> [--fix]
```

* Run on the host computer or the robot
* It will print a summary of the record count and channels recorded for each tub
* It will print the records that throw an exception while reading
* The optional `--fix` will delete records that have problems

## Augment Tub

This command allows you to perform data augmentation on a tub or set of tubs directly. The augmentation is also available in training via the `--aug` flag. Preprocessing the tub can speed up training, as the augmentation can take some time. You can also train with the unmodified tub and the augmented tub joined together.

Usage:

```bash
donkey tubaugment <tub_path> [--inplace]
```

* Run on the host computer or the robot
* The optional `--inplace` will replace the original tub images when provided. Otherwise `tub_XX_YY-MM-DD` will be copied to a new tub `tub_XX_aug_YY-MM-DD` and the original data remains unchanged
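The kind of transformation applied can be illustrated with a simple, hypothetical brightness augmentation (donkeycar's real augmentations are configured elsewhere and differ from this sketch):

```python
import numpy as np

def augment_brightness(img, factor):
    """Scale pixel brightness, clipping to the valid uint8 range."""
    out = img.astype(np.float32) * factor
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.full((120, 160, 3), 100, dtype=np.uint8)
brighter = augment_brightness(frame, 1.5)  # all pixels become 150
```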


## Histogram

This command will show a pop-up window showing the histogram of record values in a given tub.

> Note: This should be moved from manage.py to donkey command

Usage:

```bash
donkey tubhist <tub_path> --rec=<"user/angle">
```

* Run on the host computer

* When the `--tub` is omitted, it will check all tubs in the default data dir
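Under the hood this amounts to binning the values of the chosen record field; a sketch with numpy (the angle values below are made up):

```python
import numpy as np

# steering angles as they might be read from "user/angle" in a tub
angles = [0.0, 0.1, -0.2, 0.1, 0.5, -0.2, 0.0, 0.1]

# bin counts and bin edges over the steering range
counts, edges = np.histogram(angles, bins=4, range=(-1.0, 1.0))
```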

## Plot Predictions

@@ -113,6 +159,55 @@ donkey tubplot <tub_path> [--model=<model_path>]
* Will show a pop-up window showing the plot of steering values in a given tub compared to NN predictions from the trained model
* When the `--tub` is omitted, it will check all tubs in the default data dir

## Continuous Rsync

This command uses rsync to copy files from your pi to your host. It does so in a loop, continuously copying files. By default, it will also delete any files
on the host that are deleted on the pi. This allows your PS3 Triangle edits to affect the files on both machines.

Usage:

```bash
donkey consync [--dir = <data_path>] [--delete=<y|n>]
```

* Run on the host computer
* First copy your public key to the pi so you don't need a password for each rsync:

```bash
cat ~/.ssh/id_rsa.pub | ssh pi@<your pi ip> 'cat >> .ssh/authorized_keys'
```

* If you don't have an `id_rsa.pub`, then google how to generate one
* Edit your `config.py` and make sure the fields `PI_USERNAME`, `PI_HOSTNAME`, `PI_DONKEY_ROOT` are set up. On Windows only, you also need to set `PI_PASSWD`.
* This command may be run from `~/mycar` dir
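The loop essentially re-runs an rsync invocation like the one sketched below. The helper function and flag choices are illustrative assumptions, not donkeycar's exact implementation:

```python
def consync_cmd(user, host, remote_dir, local_dir, delete=True):
    """Build the rsync command the sync loop runs repeatedly."""
    cmd = ["rsync", "-a", "--progress"]
    if delete:
        cmd.append("--delete")  # mirror deletions from the pi
    cmd += [f"{user}@{host}:{remote_dir}/", local_dir]
    return cmd

# e.g. ['rsync', '-a', '--progress', '--delete',
#       'pi@donkeypi.local:~/mycar/data/', './data']
example = consync_cmd("pi", "donkeypi.local", "~/mycar/data", "./data")
```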

## Continuous Train

This command fires off the keras training in a mode where it will continuously look for new data at the end of every epoch.

Usage:

```bash
donkey contrain [--tub=<data_path>] [--model=<path to model>] [--transfer=<path to model>] [--type=<linear|categorical|rnn|imu|behavior|3d>] [--aug]
```

* This command may be run from `~/mycar` dir
* Run on the host computer
* First copy your public key to the pi so you don't need a password for each rsync:

```bash
cat ~/.ssh/id_rsa.pub | ssh pi@<your pi ip> 'cat >> .ssh/authorized_keys'
```

* If you don't have an `id_rsa.pub`, then google how to generate one
* Edit your `config.py` and make sure the fields `PI_USERNAME`, `PI_HOSTNAME`, `PI_DONKEY_ROOT` are set up. On Windows only, you also need to set `PI_PASSWD`.
* Optionally it can send the model file to your pi when it achieves a new best loss. In `config.py` set `SEND_BEST_MODEL_TO_PI = True`.
* Your pi drive loop will autoload the weights file when it changes. This works best if the car is started with `.json` weights, like:

```bash
python manage.py drive --model models/drive.json
```

## Joystick Wizard

This command line wizard will walk you through the steps to create a customized controller.