Added Squeeze/Excite and BatchNorm recipes.
vahidk committed Sep 16, 2017
1 parent e6ac9f3 commit d6becaf
Showing 1 changed file (README.md) with 73 additions and 1 deletion.
@@ -24,7 +24,8 @@ Table of Contents
- [KL-Divergence](#kld)
- [Make parallel](#make_parallel)
- [Leaky Relu](#leaky_relu)

- [Batch normalization](#batch_norm)
- [Squeeze and excitation](#squeeze_excite)
---

_We aim to gradually expand this series by adding new articles and to keep the content up to date with the latest releases of the TensorFlow API. If you have suggestions on how to improve this series or find the explanations ambiguous, feel free to create an issue, send patches, or reach out by email._
@@ -1409,3 +1410,74 @@ def leaky_relu(tensor, alpha=0.1):
"""Computes the leaky rectified linear activation."""
return tf.maximum(x, alpha * x)
```
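
A quick sanity check of the recipe above, with hypothetical input values (assumes a TF 1.x graph session):

```python
import tensorflow as tf

x = tf.constant([-2.0, 0.0, 3.0])
y = leaky_relu(x, alpha=0.1)

with tf.Session() as sess:
    print(sess.run(y))  # [-0.2  0.   3. ]
```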

## Batch normalization <a name="batch_norm"></a>
```python
def batch_normalization(tensor, training=False, epsilon=0.001, momentum=0.9,
                        fused_batch_norm=False, name=None):
    """Performs batch normalization on a given 4-D tensor.

    The features are assumed to be in NHWC format. Note that you need to
    run UPDATE_OPS in order for this function to perform correctly, e.g.:

      with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
          train_op = optimizer.minimize(loss)
    """
    with tf.variable_scope(name, default_name="batch_normalization"):
        channels = tensor.shape.as_list()[-1]
        # Normalize over all dimensions except the channel dimension.
        axes = list(range(tensor.shape.ndims - 1))

        # Learnable per-channel offset (beta) and scale (gamma).
        beta = tf.get_variable(
            "beta", channels, initializer=tf.zeros_initializer())
        gamma = tf.get_variable(
            "gamma", channels, initializer=tf.ones_initializer())

        # Moving averages of the batch statistics, used at inference time.
        avg_mean = tf.get_variable(
            "avg_mean", channels, initializer=tf.zeros_initializer(),
            trainable=False)
        avg_variance = tf.get_variable(
            "avg_variance", channels, initializer=tf.ones_initializer(),
            trainable=False)

        if training:
            if fused_batch_norm:
                # The fused kernel computes the batch statistics itself.
                mean, variance = None, None
            else:
                mean, variance = tf.nn.moments(tensor, axes=axes)
        else:
            mean, variance = avg_mean, avg_variance

        if fused_batch_norm:
            tensor, mean, variance = tf.nn.fused_batch_norm(
                tensor, scale=gamma, offset=beta, mean=mean, variance=variance,
                epsilon=epsilon, is_training=training)
        else:
            tensor = tf.nn.batch_normalization(
                tensor, mean, variance, beta, gamma, epsilon)

        if training:
            # Exponential moving-average updates. These ops only run when
            # collected into UPDATE_OPS and executed with the training op.
            update_mean = tf.assign(
                avg_mean, avg_mean * momentum + mean * (1.0 - momentum))
            update_variance = tf.assign(
                avg_variance,
                avg_variance * momentum + variance * (1.0 - momentum))

            tf.add_to_collection(tf.GraphKeys.UPDATE_OPS, update_mean)
            tf.add_to_collection(tf.GraphKeys.UPDATE_OPS, update_variance)

        return tensor
```
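
A minimal usage sketch showing the UPDATE_OPS dependency the docstring mentions. The `images` placeholder, the stand-in loss, and the optimizer choice are hypothetical; only the wiring pattern matters (TF 1.x graph mode):

```python
import tensorflow as tf

images = tf.placeholder(tf.float32, [None, 32, 32, 3])  # hypothetical input
net = tf.layers.conv2d(images, 64, 3, padding="same")
net = batch_normalization(net, training=True)
net = tf.nn.relu(net)

loss = tf.reduce_mean(tf.square(net))  # stand-in loss for illustration
optimizer = tf.train.AdamOptimizer(1e-3)

# Run the moving-average updates together with each training step.
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
    train_op = optimizer.minimize(loss)
```

Without the `control_dependencies` block, the moving averages never update and inference-time normalization uses the initial zeros/ones statistics.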

## Squeeze and excitation <a name="squeeze_excite"></a>
```python
def squeeze_and_excite(tensor, ratio):
    """Squeeze-and-excitation layer (channel-wise feature recalibration)."""
    original = tensor
    units = tensor.shape.as_list()[-1]
    # Squeeze: global average pooling over the spatial dimensions.
    tensor = tf.reduce_mean(tensor, [1, 2], keep_dims=True)
    # Excite: a bottleneck MLP producing per-channel gates in (0, 1).
    tensor = tf.layers.dense(tensor, units // ratio, use_bias=False)
    tensor = tf.nn.relu(tensor)
    tensor = tf.layers.dense(tensor, units, use_bias=False)
    tensor = tf.nn.sigmoid(tensor)
    # Rescale the original features channel-wise.
    return original * tensor
```
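
A hypothetical usage, gating an NHWC feature map (the placeholder shape and ratio are illustrative):

```python
import tensorflow as tf

features = tf.placeholder(tf.float32, [None, 16, 16, 64])  # hypothetical input
gated = squeeze_and_excite(features, ratio=4)  # same shape as `features`
```

The `ratio` argument trades off capacity against cost: a larger ratio shrinks the bottleneck, making the gating module cheaper at the expense of expressiveness.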
