README last updated on: 02/19/2018
Reinforcement learning framework and algorithms implemented in PyTorch.
Some implemented algorithms:
- Temporal Difference Models (TDMs)
- Deep Deterministic Policy Gradient (DDPG)
- (Double) Deep Q-Network (DQN)
- Soft Actor Critic (SAC)
- Twin Delayed Deep Deterministic Policy Gradient (TD3)
- Twin Soft Actor Critic (Twin SAC), a combination of SAC and TD3
To get started, check out the example scripts in the examples/ directory.
10/16/2018
- Upgraded to PyTorch v0.4
- Added Twin Soft Actor Critic Implementation
- Various small refactors (e.g. logger, evaluation code)
Install and use the included Anaconda environment:
$ conda env create -f docker/rlkit/rlkit-env.yml
$ source activate rlkit
(rlkit) $ python examples/ddpg.py
There is also a GPU version in docker/rlkit_gpu:
$ conda env create -f docker/rlkit_gpu/rlkit-env.yml
$ source activate rlkit-gpu
(rlkit-gpu) $ python examples/ddpg.py
NOTE: these Anaconda environments use MuJoCo 1.5 and gym 0.10.5, unlike previous versions.
For an even more portable solution, try using the Docker image provided in docker/rlkit_gpu. The Anaconda environment should be enough, but this Docker image addresses some of the rendering issues that may arise when using MuJoCo 1.5 with GPUs.
To use the GPU docker image, you will need a GPU and nvidia-docker installed.
Note that you'll need to get your own MuJoCo key if you want to use MuJoCo.
During training, the results will be saved to a folder under
LOCAL_LOG_DIR/<exp_prefix>/<foldername>
- LOCAL_LOG_DIR is the directory set by rlkit.launchers.config.LOCAL_LOG_DIR. The default name is 'output'.
- <exp_prefix> is given to setup_logger.
- <foldername> is auto-generated based on exp_prefix.
- Inside this folder, you should see a file called params.pkl.
To visualize a policy, run
(rlkit) $ python scripts/sim_policy.py LOCAL_LOG_DIR/<exp_prefix>/<foldername>/params.pkl
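The path convention above can be sketched in plain Python. This is only an illustration: LOCAL_LOG_DIR and exp_prefix come from the README, but the exact timestamp/ID format of the auto-generated folder name is an assumption, not rlkit's actual scheme.

```python
import datetime
import os.path

# LOCAL_LOG_DIR as set in rlkit.launchers.config; 'output' is the default name.
LOCAL_LOG_DIR = "output"

def log_dir_for(exp_prefix, exp_id=0, timestamp=None):
    """Illustrative sketch of how the experiment folder path is composed.

    The real folder name is auto-generated by rlkit from exp_prefix;
    the timestamp/ID format used here is an assumption for illustration.
    """
    if timestamp is None:
        timestamp = datetime.datetime.now().strftime("%Y_%m_%d_%H_%M_%S")
    foldername = "%s_%s_%04d" % (exp_prefix, timestamp, exp_id)
    return os.path.join(LOCAL_LOG_DIR, exp_prefix, foldername)

print(log_dir_for("ddpg-half-cheetah", timestamp="2018_10_16_12_00_00"))
# -> output/ddpg-half-cheetah/ddpg-half-cheetah_2018_10_16_12_00_00_0000 (on POSIX)
```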
If you have rllab installed, you can also visualize the results using rllab's viskit. tl;dr: run

python rllab/viskit/frontend.py LOCAL_LOG_DIR/<exp_prefix>/

to visualize all experiments with a prefix of exp_prefix. To visualize only a single run:

python rllab/viskit/frontend.py LOCAL_LOG_DIR/<exp_prefix>/<foldername>
Alternatively, if you don't want to clone all of rllab, a repository containing only viskit is available. You can similarly visualize results with:

python viskit/viskit/frontend.py LOCAL_LOG_DIR/<exp_prefix>/
This viskit repo also has a few extra nice features, like plotting multiple Y-axis values at once, splitting figures on multiple keys, and filtering out hyperparameters.
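If you just want the raw numbers without viskit, each run folder contains the CSV of logged metrics that viskit consumes. Assuming it is named progress.csv (an assumption based on the rllab/viskit convention), a stdlib-only sketch:

```python
import csv

def read_metric(progress_csv_path, key):
    """Read one logged column from a run's progress CSV.

    Assumes the viskit-style layout: a single header row, then one row
    per epoch. The column name `key` must match a header; the exact
    column names depend on the algorithm being run.
    """
    with open(progress_csv_path) as f:
        return [float(row[key]) for row in csv.DictReader(f)]

# Hypothetical usage -- the path and column name below are placeholders:
# returns = read_metric("output/my-exp/<foldername>/progress.csv", "AverageReturn")
```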
To visualize a TDM policy, run
(rlkit) $ python scripts/sim_tdm_policy.py LOCAL_LOG_DIR/<exp_prefix>/<foldername>/params.pkl
Recommended hyperparameters to tune:
- max_tau
- reward_scale
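A hedged sketch of where these hyperparameters would typically be set in an example script's variant dictionary. Only max_tau and reward_scale are named in this README; the surrounding key names are assumptions about a typical rlkit-style script, not the actual TDM example:

```python
# Illustrative variant dictionary for a TDM experiment. The nesting and
# key name `algo_kwargs` are assumptions for illustration only.
variant = dict(
    algo_kwargs=dict(
        max_tau=10,        # TDM planning horizon; recommended to tune
        reward_scale=100,  # multiplies rewards before the critic update; tune
    ),
)

def scale_reward(reward, reward_scale):
    """What reward_scale does mechanically: rewards are multiplied by
    this factor before entering the Bellman update."""
    return reward_scale * reward
```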
The SAC implementation provided here uses only a Gaussian policy, rather than the Gaussian mixture model described in the original SAC paper. Recommended hyperparameters to tune:
- reward_scale
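For context, SAC's Gaussian policy is typically tanh-squashed: an action is a = tanh(u) with u ~ N(mu, sigma), and the log-density picks up a change-of-variables correction. A stdlib-only sketch of that standard formula, in the scalar case (variable names are mine, not rlkit's):

```python
import math
import random

def sample_tanh_gaussian(mu, sigma, rng=random):
    """Sample an action from a tanh-squashed Gaussian policy and return
    (action, log_prob). Standard SAC change-of-variables:
    log p(a) = log N(u; mu, sigma) - log(1 - tanh(u)^2), with a = tanh(u).
    """
    u = rng.gauss(mu, sigma)
    a = math.tanh(u)
    # Log-density of the pre-squash sample u under N(mu, sigma)
    log_normal = (-0.5 * ((u - mu) / sigma) ** 2
                  - math.log(sigma * math.sqrt(2 * math.pi)))
    # Jacobian correction for the tanh squashing (small eps for stability)
    log_prob = log_normal - math.log(1 - a * a + 1e-6)
    return a, log_prob
```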
This quite literally combines TD3 and SAC. Recommended hyperparameters to tune:
- reward_scale
A lot of the coding infrastructure is based on rllab. The serialization and logger code are basically a carbon copy of the rllab versions.
The Dockerfile is based on the OpenAI mujoco-py Dockerfile.
- Include policy-gradient algorithms.
- Include model-based algorithms.