Adjust l2 norm penalty in example
The reference implementation uses TensorFlow's l2 norm penalty, which divides the weighting parameter by 2, but Keras's l2 norm penalty implementation doesn't. This brings the example closer to the reference implementation.
mawright committed Oct 18, 2018
1 parent ee45632 commit d92b02e
Showing 1 changed file with 1 addition and 1 deletion: examples/gat.py
@@ -20,7 +20,7 @@
 F_ = 8 # Output size of first GraphAttention layer
 n_attn_heads = 8 # Number of attention heads in first GAT layer
 dropout_rate = 0.6 # Dropout rate (between and inside GAT layers)
-l2_reg = 5e-4 # Factor for l2 regularization
+l2_reg = 5e-4/2 # Factor for l2 regularization
 learning_rate = 5e-3 # Learning rate for Adam
 epochs = 10000 # Number of training epochs
 es_patience = 100 # Patience for early stopping
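For reference, a minimal sketch of the equivalence this change relies on: TensorFlow's tf.nn.l2_loss(w) computes sum(w**2) / 2, while Keras's l2 regularizer computes c * sum(w**2) with no factor of 1/2, so halving the Keras coefficient reproduces the reference penalty. The snippet below is illustrative only and not part of the commit (it assumes TensorFlow 2.x with eager execution; the weight tensor w and its shape are made up):

```python
import numpy as np
import tensorflow as tf

l2_reg = 5e-4
w = tf.constant(np.random.randn(8, 8).astype(np.float32))  # hypothetical weight matrix

# Reference (TensorFlow) style: tf.nn.l2_loss(w) == sum(w**2) / 2,
# so the penalty is l2_reg * sum(w**2) / 2.
ref_penalty = l2_reg * tf.nn.l2_loss(w)

# Keras style: regularizers.l2(c)(w) == c * sum(w**2), with no 1/2 factor.
# Halving the coefficient makes the two penalties match.
keras_penalty = tf.keras.regularizers.l2(l2_reg / 2)(w)

print(float(ref_penalty), float(keras_penalty))  # the two values agree
```

Both lines print the same number, matching the 5e-4/2 coefficient now used in the example.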
