README last updated on: 02/19/2018

rlkit

Reinforcement learning framework and algorithms implemented in PyTorch.

Some implemented algorithms:

  • Deep Deterministic Policy Gradient (DDPG)
  • Soft Actor-Critic (SAC)
  • Twin Soft Actor-Critic (Twin SAC)
  • Temporal Difference Models (TDMs)

To get started, check out the example scripts in the examples/ directory.

What's New

10/16/2018

  • Upgraded to PyTorch v0.4
  • Added a Twin Soft Actor-Critic implementation
  • Various small refactors (e.g., logger, evaluation code)

Installation

Install and use the included Anaconda environment:

$ conda env create -f docker/rlkit/rlkit-env.yml
$ source activate rlkit
(rlkit) $ python examples/ddpg.py

There is also a GPU version in docker/rlkit_gpu:

$ conda env create -f docker/rlkit_gpu/rlkit-env.yml
$ source activate rlkit-gpu
(rlkit-gpu) $ python examples/ddpg.py

NOTE: these Anaconda environments use MuJoCo 1.5 and gym 0.10.5, unlike previous versions.

For an even more portable solution, try using the docker image provided in docker/rlkit_gpu. The Anaconda env should be enough, but this docker image addresses some of the rendering issues that may arise when using MuJoCo 1.5 and GPUs. To use the GPU docker image, you will need a GPU and nvidia-docker installed. Note that you'll need to get your own MuJoCo key if you want to use MuJoCo.

Visualizing a policy and seeing results

During training, results will be saved under

LOCAL_LOG_DIR/<exp_prefix>/<foldername>
  • LOCAL_LOG_DIR is the directory set by rlkit.launchers.config.LOCAL_LOG_DIR. The default name is 'output'.
  • <exp_prefix> is the experiment prefix passed to setup_logger (see the sketch below).
  • <foldername> is auto-generated based on exp_prefix.
  • Inside this folder, you should see a file called params.pkl. To visualize a policy, run
(rlkit) $ python scripts/sim_policy.py LOCAL_LOG_DIR/<exp_prefix>/<foldername>/params.pkl
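For context, the sketch below shows how exp_prefix typically reaches the logger from a launch script. It assumes setup_logger is importable from rlkit.launchers.launcher_util and accepts an exp_prefix plus a variant dict; check the exact signature in your checkout, since this is an illustration rather than the canonical launch code.

# Hypothetical launch snippet; keys in the variant dict are illustrative.
from rlkit.launchers.launcher_util import setup_logger

variant = dict(
    algo_params=dict(
        num_epochs=100,
        reward_scale=1.0,
    ),
)

# Logs (including params.pkl) end up in
# LOCAL_LOG_DIR/my-experiment/<auto-generated foldername>/
setup_logger('my-experiment', variant=variant)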

If you have rllab installed, you can also visualize the results using rllab's viskit, described at the bottom of this page.

tl;dr run

python rllab/viskit/frontend.py LOCAL_LOG_DIR/<exp_prefix>/

to visualize all experiments with a prefix of exp_prefix. To only visualize a single run, you can do

python rllab/viskit/frontend.py LOCAL_LOG_DIR/<exp_prefix>/<folder name>

Alternatively, if you don't want to clone all of rllab, a repository containing only viskit can be found here. You can similarly visualize results with:

python viskit/viskit/frontend.py LOCAL_LOG_DIR/<exp_prefix>/

This viskit repo also has a few extra nice features, like plotting multiple Y-axis values at once, figure-splitting on multiple keys, and being able to filter hyperparameters out.

Visualizing a TDM policy

To visualize a TDM policy, run

(rlkit) $ python scripts/sim_tdm_policy.py LOCAL_LOG_DIR/<exp_prefix>/<foldername>/params.pkl

Algorithm-Specific Comments

TDM

Recommended hyperparameters to tune:

  • max_tau
  • reward_scale
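For concreteness, here is a hypothetical variant snippet showing where these two knobs might live; the key names are illustrative and not necessarily the exact schema used by the TDM examples.

# Illustrative only: key names are assumptions, not the repo's exact schema.
variant = dict(
    tdm_kwargs=dict(
        max_tau=10,        # horizon of the temporal difference model
    ),
    algo_kwargs=dict(
        reward_scale=1.0,  # scales rewards before they enter the Bellman backup
    ),
)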

SAC

The SAC implementation provided here uses only a Gaussian policy, rather than the Gaussian mixture model described in the original SAC paper. Recommended hyperparameters to tune:

  • reward_scale
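To make the "Gaussian policy, not a mixture" point concrete, here is a minimal, self-contained PyTorch sketch of a tanh-squashed diagonal Gaussian policy head. It is illustrative only and is not the repo's policy class; names and layer sizes are assumptions.

# Minimal diagonal-Gaussian (tanh-squashed) policy head, for illustration.
import torch
import torch.nn as nn

class DiagGaussianPolicy(nn.Module):
    def __init__(self, obs_dim, action_dim, hidden=300):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, action_dim)
        self.log_std = nn.Linear(hidden, action_dim)

    def forward(self, obs):
        h = self.net(obs)
        mean = self.mean(h)
        log_std = self.log_std(h).clamp(-20, 2)   # keep std in a sane range
        dist = torch.distributions.Normal(mean, log_std.exp())
        pre_tanh = dist.rsample()                 # reparameterized sample
        action = torch.tanh(pre_tanh)             # squash into [-1, 1]
        log_prob = dist.log_prob(pre_tanh).sum(-1)
        # change-of-variables correction for the tanh squashing
        log_prob = log_prob - torch.log(1 - action.pow(2) + 1e-6).sum(-1)
        return action, log_prob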

Twin SAC

This quite literally combines TD3 and SAC. Recommended hyperparameters to tune:

  • reward_scale
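As a rough illustration of the combination, the sketch below computes a TD3-style clipped double-Q target with SAC's entropy bonus; variable and network names are assumptions, not the repo's exact code.

# Illustrative Twin SAC target: min over two target critics (TD3) plus an
# entropy term (SAC). All names here are hypothetical.
import torch

def twin_sac_q_target(rewards, dones, next_obs, policy, target_qf1, target_qf2,
                      discount=0.99, alpha=1.0):
    next_actions, next_log_pi = policy(next_obs)          # sample from current policy
    target_q = torch.min(                                 # clipped double-Q (TD3)
        target_qf1(next_obs, next_actions),
        target_qf2(next_obs, next_actions),
    )
    target_v = target_q - alpha * next_log_pi             # entropy bonus (SAC)
    return rewards + (1.0 - dones) * discount * target_v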

Credits

A lot of the coding infrastructure is based on rllab. The serialization and logger code are basically a carbon copy of the rllab versions.

The Dockerfile is based on the OpenAI mujoco-py Dockerfile.

TODOs

  • Include policy-gradient algorithms.
  • Include model-based algorithms.
