README last updated on: 02/19/2018

rlkit

Reinforcement learning framework and algorithms implemented in PyTorch.

Some implemented algorithms:

  • Deep Deterministic Policy Gradient (DDPG)
  • Soft Actor-Critic (SAC)
  • Temporal Difference Models (TDMs)

To get started, check out the example scripts in the examples directory.

Installation

Install and use the included Anaconda environment:

$ conda env create -f docker/rlkit/rlkit-env.yml
$ source activate rlkit
(rlkit) $ python examples/ddpg.py

There is also a GPU version in docker/rlkit_gpu:

$ conda env create -f docker/rlkit_gpu/rlkit-env.yml
$ source activate rlkit-gpu
(rlkit-gpu) $ python examples/ddpg.py

NOTE: these Anaconda environments use MuJoCo 1.5 and gym 0.10.5, unlike previous versions.
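To confirm which versions actually ended up in the active environment, a quick sanity check is:

(rlkit) $ python -c "import gym; print(gym.__version__)"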

For an even more portable solution, try using the docker image provided in docker/rlkit_gpu. The Anaconda env should be enough, but this docker image addresses some of the rendering issues that may arise when using MuJoCo 1.5 and GPUs. To use the GPU docker image, you will need a GPU and nvidia-docker installed. Note that you'll need to get your own MuJoCo key if you want to use MuJoCo.
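For reference, a typical build-and-run sequence might look like the following; the image tag and run flags are illustrative, not something the repo prescribes:

$ docker build -t rlkit-gpu docker/rlkit_gpu/
$ nvidia-docker run -it rlkit-gpu /bin/bash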

Visualizing a policy and seeing results

During training, the results will be saved under

LOCAL_LOG_DIR/<exp_prefix>/<foldername>

  • LOCAL_LOG_DIR is the directory set by rlkit.launchers.config.LOCAL_LOG_DIR. The default name is 'output'.
  • <exp_prefix> is the experiment prefix given to setup_logger.
  • <foldername> is auto-generated based off of <exp_prefix>.

Inside this folder, you should see a file called params.pkl. To visualize a policy, run
(rlkit) $ python scripts/sim_policy.py LOCAL_LOG_DIR/<exp_prefix>/<foldername>/params.pkl
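For context, a training script typically calls setup_logger before launching the algorithm, which is what creates this folder structure. The following is a minimal sketch, assuming setup_logger is importable from rlkit.launchers.launcher_util and takes an experiment prefix plus a variant dict, as in the example scripts; the variant contents are purely illustrative:

# Minimal launcher sketch (hypothetical hyperparameters).
from rlkit.launchers.launcher_util import setup_logger

variant = dict(
    algo_params=dict(
        num_epochs=100,
        batch_size=128,
    ),
)

# Creates LOCAL_LOG_DIR/<exp_prefix>/<foldername>/ and starts logging there;
# snapshots such as params.pkl are written into that folder.
setup_logger('my-experiment', variant=variant)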

If you have rllab installed, you can also visualize the results using rllab's viskit, described at the bottom of this page.

tl;dr run

python rllab/viskit/frontend.py LOCAL_LOG_DIR/<exp_prefix>/

Alternatively, if you don't want to clone all of rllab, a repository containing only viskit can be found here. Then you can similarly visualize results with:

python viskit/viskit/frontend.py LOCAL_LOG_DIR/<exp_prefix>/

Visualizing a TDM policy

To visualize a TDM policy, run

(rlkit) $ python scripts/sim_tdm_policy.py LOCAL_LOG_DIR/<exp_prefix>/<foldername>/params.pkl

Algorithm-Specific Comments

SAC

The SAC implementation provided here uses only a Gaussian policy, rather than the Gaussian mixture model described in the original SAC paper.
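For intuition, the kind of squashed Gaussian policy this refers to can be sketched in PyTorch as follows; the class name, layer sizes, and clamp range are illustrative and not the repo's actual implementation:

# Illustrative tanh-squashed Gaussian policy head (not rlkit's actual class).
import torch
from torch import nn
from torch.distributions import Normal

class GaussianPolicy(nn.Module):
    def __init__(self, obs_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, action_dim)
        self.log_std = nn.Linear(hidden, action_dim)

    def forward(self, obs):
        h = self.net(obs)
        mean = self.mean(h)
        log_std = self.log_std(h).clamp(-20, 2)   # keep the std numerically sane
        dist = Normal(mean, log_std.exp())
        pre_tanh = dist.rsample()                 # reparameterized sample
        action = torch.tanh(pre_tanh)             # squash into [-1, 1]
        # Correct the log-prob for the tanh change of variables.
        log_prob = dist.log_prob(pre_tanh) - torch.log(1 - action.pow(2) + 1e-6)
        return action, log_prob.sum(dim=-1)

A single Gaussian like this is simpler to optimize than the mixture described in the paper, at the cost of a less expressive action distribution.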

Credits

A lot of the coding infrastructure is based on rllab. The serialization and logger code are basically a carbon copy of the rllab versions.

The Dockerfile is based on the OpenAI mujoco-py Dockerfile.
