
Commit

update multigpu gluon
mli committed Oct 20, 2017
1 parent a8555c8 commit c4b76de
Showing 1 changed file with 10 additions and 4 deletions.
14 changes: 10 additions & 4 deletions chapter_gluon-advances/multiple-gpus-gluon.md
@@ -58,15 +58,15 @@ from mxnet import autograd
from time import time
from mxnet import init

-def run(num_gpus, batch_size, lr):
+def train(num_gpus, batch_size, lr):
    train_data, test_data = utils.load_data_fashion_mnist(batch_size)
    ctx = [gpu(i) for i in range(num_gpus)]
    print('Running on', ctx)
    net = utils.resnet18_28(10)
    net.initialize(init=init.Xavier(), ctx=ctx)
    loss = gluon.loss.SoftmaxCrossEntropyLoss()
    trainer = gluon.Trainer(
        net.collect_params(), 'sgd', {'learning_rate': lr})
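
The training loop inside `train` is elided from this hunk. In the chapter it splits each batch across the GPU contexts and lets the trainer aggregate the gradients; a minimal sketch of one such step, assuming the standard Gluon helpers `gluon.utils.split_and_load` and `trainer.step` (an illustration, not the file's exact code):

```python
from mxnet import autograd, gluon

def train_batch(data, label, ctx, net, loss, trainer):
    # Split the batch evenly across the available GPU contexts.
    data_parts = gluon.utils.split_and_load(data, ctx)
    label_parts = gluon.utils.split_and_load(label, ctx)
    # Forward and backward on each device's slice of the batch.
    with autograd.record():
        losses = [loss(net(X), y) for X, y in zip(data_parts, label_parts)]
    for l in losses:
        l.backward()
    # step() sums the gradients from all contexts before updating the parameters.
    trainer.step(data.shape[0])
```

The same loop works for any number of devices, which is why only `num_gpus`, the batch size, and the learning rate change in the calls below.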
@@ -95,14 +95,20 @@ def run(num_gpus, batch_size, lr):
Try running it on a single GPU.

```{.python .input}
-run(1, 256, .1)
+train(1, 256, .1)
```

Same parameters, but using two GPUs.

```{.python .input}
-run(2, 256, .1)
+train(2, 256, .1)
```

+Increase the batch size and the learning rate.
+
+```{.python .input}
+train(2, 512, .2)
+```
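
Doubling the batch size together with the learning rate keeps the per-GPU workload and the per-example update scale roughly unchanged; a quick check of the arithmetic (illustrative only):

```python
# With 2 GPUs and a global batch of 512, each GPU still processes 256 examples
# per step, the same per-device load as the single-GPU run above. The learning
# rate is scaled in proportion to the global batch size (0.1 -> 0.2).
num_gpus, batch_size, lr = 2, 512, .2
per_gpu_batch = batch_size // num_gpus
assert per_gpu_batch == 256
```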

## Conclusion
