
Fork of https://github.com/s…ixiang/rllabplusplus with modifications for the paper "The Mirage of Action-Dependent Baselines in Reinforcement Learning".

brain-research/mirage-rl-qprop

Mirage Experiments

The data for all of the Q-Prop results is contained in data/local/*. To reproduce the plots from the paper, run python plot_rewards.py; to generate the same plot with a separate legend in each subfigure (useful for cropping), run python plot_rewards.py --mini.
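
For example (these commands assume they are run from the folder containing plot_rewards.py):

python plot_rewards.py
python plot_rewards.py --mini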

NOTE: Running the experiments in sandbox/rocky/tf/launchers/sample_run.sh may throw a ModuleNotFoundError. To fix this, add the repository's top-level folder to your PYTHONPATH environment variable, as shown below.
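
For example, assuming the repository is cloned to ~/mirage-rl-qprop (adjust the path to your own checkout):

export PYTHONPATH=~/mirage-rl-qprop:$PYTHONPATH
bash sandbox/rocky/tf/launchers/sample_run.sh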

rllab++

rllab++ is a framework for developing and evaluating reinforcement learning algorithms, built on rllab. It adds algorithm implementations beyond those included in rllab, most notably Q-Prop, which is used for the experiments above.

The code is experimental and may require tuning or modifications to reach the best reported performance.

Installation

Please follow the basic installation instructions in the rllab documentation.

Examples

From the launchers directory (sandbox/rocky/tf/launchers), run:

python algo_gym_stub.py --exp=<exp_name>

Optional flags and their default values are defined in launcher_utils.py.

The experiment results will be saved in data/local/<exp_name>.
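
For example (qprop_test is a hypothetical experiment name; consult launcher_utils.py for the flag names and values your checkout actually supports):

python algo_gym_stub.py --exp=qprop_test

The results for this run would then appear under data/local/qprop_test.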

Citations

If you use rllab++ for academic research, you are highly encouraged to cite the associated papers, including "The Mirage of Action-Dependent Baselines in Reinforcement Learning" for this fork.
