fix a bug in DDPG.py.
jiangyuzhao committed Apr 2, 2019
1 parent 967c829 commit 2c6c46d
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion contents/9_Deep_Deterministic_Policy_Gradient_DDPG/DDPG.py
@@ -138,7 +138,7 @@ def __init__(self, sess, state_dim, action_dim, learning_rate, gamma, replacemen
         self.train_op = tf.train.AdamOptimizer(self.lr).minimize(self.loss)

         with tf.variable_scope('a_grad'):
-            self.a_grads = tf.gradients(self.q, a)[0]        # tensor of gradients of each sample (None, a_dim)
+            self.a_grads = tf.gradients(self.q, self.a)[0]   # tensor of gradients of each sample (None, a_dim)

         if self.replacement['name'] == 'hard':
             self.t_replace_counter = 0
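For context, `self.a_grads` carries dQ/da, the term DDPG's actor chains with dmu/dtheta to ascend the critic's Q; the patch points `tf.gradients` at `self.a`, the action tensor actually wired into the critic graph, rather than the constructor argument `a`. A minimal NumPy sketch of that chain rule (the toy `critic_q`, `dq_da`, and the linear actor below are hypothetical illustrations, not code from DDPG.py):

```python
import numpy as np

np.random.seed(0)

# Toy setup: 1-D state/action, linear actor mu(s) = theta * s,
# quadratic critic Q(s, a) = -(a - 2*s)**2, maximized at a == 2*s.

def critic_q(s, a):
    return -(a - 2.0 * s) ** 2

def dq_da(s, a):
    # Analytic gradient of Q w.r.t. the action: the role of `self.a_grads`.
    return -2.0 * (a - 2.0 * s)

theta = 0.0   # actor parameter; the optimum here is theta == 2
lr = 0.1
for _ in range(200):
    s = np.random.uniform(-1.0, 1.0)
    a = theta * s                     # actor output mu(s)
    # Deterministic policy gradient: dQ/da * d(mu)/d(theta) = dQ/da * s
    theta += lr * dq_da(s, a) * s

print(round(theta, 2))                # theta converges toward 2.0
```

This is why the gradient must be taken w.r.t. the exact tensor the critic consumes: in TF1, `tf.gradients` returns `None` for a tensor that is not on the path to `self.q`, and the actor update would get no signal.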
